Dec  3 15:34:22 np0005544708 kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec  3 15:34:22 np0005544708 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec  3 15:34:22 np0005544708 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  3 15:34:22 np0005544708 kernel: BIOS-provided physical RAM map:
Dec  3 15:34:22 np0005544708 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec  3 15:34:22 np0005544708 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec  3 15:34:22 np0005544708 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec  3 15:34:22 np0005544708 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec  3 15:34:22 np0005544708 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec  3 15:34:22 np0005544708 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec  3 15:34:22 np0005544708 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec  3 15:34:22 np0005544708 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec  3 15:34:22 np0005544708 kernel: NX (Execute Disable) protection: active
Dec  3 15:34:22 np0005544708 kernel: APIC: Static calls initialized
Dec  3 15:34:22 np0005544708 kernel: SMBIOS 2.8 present.
Dec  3 15:34:22 np0005544708 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec  3 15:34:22 np0005544708 kernel: Hypervisor detected: KVM
Dec  3 15:34:22 np0005544708 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec  3 15:34:22 np0005544708 kernel: kvm-clock: using sched offset of 3274772429 cycles
Dec  3 15:34:22 np0005544708 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec  3 15:34:22 np0005544708 kernel: tsc: Detected 2799.998 MHz processor
Dec  3 15:34:22 np0005544708 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec  3 15:34:22 np0005544708 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec  3 15:34:22 np0005544708 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec  3 15:34:22 np0005544708 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec  3 15:34:22 np0005544708 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec  3 15:34:22 np0005544708 kernel: Using GB pages for direct mapping
Dec  3 15:34:22 np0005544708 kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec  3 15:34:22 np0005544708 kernel: ACPI: Early table checksum verification disabled
Dec  3 15:34:22 np0005544708 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec  3 15:34:22 np0005544708 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  3 15:34:22 np0005544708 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  3 15:34:22 np0005544708 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  3 15:34:22 np0005544708 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec  3 15:34:22 np0005544708 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  3 15:34:22 np0005544708 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  3 15:34:22 np0005544708 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec  3 15:34:22 np0005544708 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec  3 15:34:22 np0005544708 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec  3 15:34:22 np0005544708 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec  3 15:34:22 np0005544708 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec  3 15:34:22 np0005544708 kernel: No NUMA configuration found
Dec  3 15:34:22 np0005544708 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec  3 15:34:22 np0005544708 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Dec  3 15:34:22 np0005544708 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec  3 15:34:22 np0005544708 kernel: Zone ranges:
Dec  3 15:34:22 np0005544708 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec  3 15:34:22 np0005544708 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec  3 15:34:22 np0005544708 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec  3 15:34:22 np0005544708 kernel:  Device   empty
Dec  3 15:34:22 np0005544708 kernel: Movable zone start for each node
Dec  3 15:34:22 np0005544708 kernel: Early memory node ranges
Dec  3 15:34:22 np0005544708 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec  3 15:34:22 np0005544708 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec  3 15:34:22 np0005544708 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec  3 15:34:22 np0005544708 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec  3 15:34:22 np0005544708 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec  3 15:34:22 np0005544708 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec  3 15:34:22 np0005544708 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec  3 15:34:22 np0005544708 kernel: ACPI: PM-Timer IO Port: 0x608
Dec  3 15:34:22 np0005544708 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec  3 15:34:22 np0005544708 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec  3 15:34:22 np0005544708 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec  3 15:34:22 np0005544708 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec  3 15:34:22 np0005544708 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec  3 15:34:22 np0005544708 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec  3 15:34:22 np0005544708 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec  3 15:34:22 np0005544708 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec  3 15:34:22 np0005544708 kernel: TSC deadline timer available
Dec  3 15:34:22 np0005544708 kernel: CPU topo: Max. logical packages:   8
Dec  3 15:34:22 np0005544708 kernel: CPU topo: Max. logical dies:       8
Dec  3 15:34:22 np0005544708 kernel: CPU topo: Max. dies per package:   1
Dec  3 15:34:22 np0005544708 kernel: CPU topo: Max. threads per core:   1
Dec  3 15:34:22 np0005544708 kernel: CPU topo: Num. cores per package:     1
Dec  3 15:34:22 np0005544708 kernel: CPU topo: Num. threads per package:   1
Dec  3 15:34:22 np0005544708 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec  3 15:34:22 np0005544708 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec  3 15:34:22 np0005544708 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec  3 15:34:22 np0005544708 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec  3 15:34:22 np0005544708 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec  3 15:34:22 np0005544708 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec  3 15:34:22 np0005544708 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec  3 15:34:22 np0005544708 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec  3 15:34:22 np0005544708 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec  3 15:34:22 np0005544708 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec  3 15:34:22 np0005544708 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec  3 15:34:22 np0005544708 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec  3 15:34:22 np0005544708 kernel: Booting paravirtualized kernel on KVM
Dec  3 15:34:22 np0005544708 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec  3 15:34:22 np0005544708 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec  3 15:34:22 np0005544708 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec  3 15:34:22 np0005544708 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec  3 15:34:22 np0005544708 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  3 15:34:22 np0005544708 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec  3 15:34:22 np0005544708 kernel: random: crng init done
Dec  3 15:34:22 np0005544708 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec  3 15:34:22 np0005544708 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec  3 15:34:22 np0005544708 kernel: Fallback order for Node 0: 0 
Dec  3 15:34:22 np0005544708 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec  3 15:34:22 np0005544708 kernel: Policy zone: Normal
Dec  3 15:34:22 np0005544708 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec  3 15:34:22 np0005544708 kernel: software IO TLB: area num 8.
Dec  3 15:34:22 np0005544708 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec  3 15:34:22 np0005544708 kernel: ftrace: allocating 49335 entries in 193 pages
Dec  3 15:34:22 np0005544708 kernel: ftrace: allocated 193 pages with 3 groups
Dec  3 15:34:22 np0005544708 kernel: Dynamic Preempt: voluntary
Dec  3 15:34:22 np0005544708 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec  3 15:34:22 np0005544708 kernel: rcu: 	RCU event tracing is enabled.
Dec  3 15:34:22 np0005544708 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec  3 15:34:22 np0005544708 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec  3 15:34:22 np0005544708 kernel: 	Rude variant of Tasks RCU enabled.
Dec  3 15:34:22 np0005544708 kernel: 	Tracing variant of Tasks RCU enabled.
Dec  3 15:34:22 np0005544708 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec  3 15:34:22 np0005544708 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec  3 15:34:22 np0005544708 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  3 15:34:22 np0005544708 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  3 15:34:22 np0005544708 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  3 15:34:22 np0005544708 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec  3 15:34:22 np0005544708 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec  3 15:34:22 np0005544708 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec  3 15:34:22 np0005544708 kernel: Console: colour VGA+ 80x25
Dec  3 15:34:22 np0005544708 kernel: printk: console [ttyS0] enabled
Dec  3 15:34:22 np0005544708 kernel: ACPI: Core revision 20230331
Dec  3 15:34:22 np0005544708 kernel: APIC: Switch to symmetric I/O mode setup
Dec  3 15:34:22 np0005544708 kernel: x2apic enabled
Dec  3 15:34:22 np0005544708 kernel: APIC: Switched APIC routing to: physical x2apic
Dec  3 15:34:22 np0005544708 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec  3 15:34:22 np0005544708 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec  3 15:34:22 np0005544708 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec  3 15:34:22 np0005544708 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec  3 15:34:22 np0005544708 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec  3 15:34:22 np0005544708 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec  3 15:34:22 np0005544708 kernel: Spectre V2 : Mitigation: Retpolines
Dec  3 15:34:22 np0005544708 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec  3 15:34:22 np0005544708 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec  3 15:34:22 np0005544708 kernel: RETBleed: Mitigation: untrained return thunk
Dec  3 15:34:22 np0005544708 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec  3 15:34:22 np0005544708 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec  3 15:34:22 np0005544708 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec  3 15:34:22 np0005544708 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec  3 15:34:22 np0005544708 kernel: x86/bugs: return thunk changed
Dec  3 15:34:22 np0005544708 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec  3 15:34:22 np0005544708 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec  3 15:34:22 np0005544708 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec  3 15:34:22 np0005544708 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec  3 15:34:22 np0005544708 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec  3 15:34:22 np0005544708 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec  3 15:34:22 np0005544708 kernel: Freeing SMP alternatives memory: 40K
Dec  3 15:34:22 np0005544708 kernel: pid_max: default: 32768 minimum: 301
Dec  3 15:34:22 np0005544708 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec  3 15:34:22 np0005544708 kernel: landlock: Up and running.
Dec  3 15:34:22 np0005544708 kernel: Yama: becoming mindful.
Dec  3 15:34:22 np0005544708 kernel: SELinux:  Initializing.
Dec  3 15:34:22 np0005544708 kernel: LSM support for eBPF active
Dec  3 15:34:22 np0005544708 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  3 15:34:22 np0005544708 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  3 15:34:22 np0005544708 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec  3 15:34:22 np0005544708 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec  3 15:34:22 np0005544708 kernel: ... version:                0
Dec  3 15:34:22 np0005544708 kernel: ... bit width:              48
Dec  3 15:34:22 np0005544708 kernel: ... generic registers:      6
Dec  3 15:34:22 np0005544708 kernel: ... value mask:             0000ffffffffffff
Dec  3 15:34:22 np0005544708 kernel: ... max period:             00007fffffffffff
Dec  3 15:34:22 np0005544708 kernel: ... fixed-purpose events:   0
Dec  3 15:34:22 np0005544708 kernel: ... event mask:             000000000000003f
Dec  3 15:34:22 np0005544708 kernel: signal: max sigframe size: 1776
Dec  3 15:34:22 np0005544708 kernel: rcu: Hierarchical SRCU implementation.
Dec  3 15:34:22 np0005544708 kernel: rcu: 	Max phase no-delay instances is 400.
Dec  3 15:34:22 np0005544708 kernel: smp: Bringing up secondary CPUs ...
Dec  3 15:34:22 np0005544708 kernel: smpboot: x86: Booting SMP configuration:
Dec  3 15:34:22 np0005544708 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec  3 15:34:22 np0005544708 kernel: smp: Brought up 1 node, 8 CPUs
Dec  3 15:34:22 np0005544708 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec  3 15:34:22 np0005544708 kernel: node 0 deferred pages initialised in 9ms
Dec  3 15:34:22 np0005544708 kernel: Memory: 7763932K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618208K reserved, 0K cma-reserved)
Dec  3 15:34:22 np0005544708 kernel: devtmpfs: initialized
Dec  3 15:34:22 np0005544708 kernel: x86/mm: Memory block size: 128MB
Dec  3 15:34:22 np0005544708 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec  3 15:34:22 np0005544708 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec  3 15:34:22 np0005544708 kernel: pinctrl core: initialized pinctrl subsystem
Dec  3 15:34:22 np0005544708 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec  3 15:34:22 np0005544708 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec  3 15:34:22 np0005544708 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec  3 15:34:22 np0005544708 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec  3 15:34:22 np0005544708 kernel: audit: initializing netlink subsys (disabled)
Dec  3 15:34:22 np0005544708 kernel: audit: type=2000 audit(1764794060.641:1): state=initialized audit_enabled=0 res=1
Dec  3 15:34:22 np0005544708 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec  3 15:34:22 np0005544708 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec  3 15:34:22 np0005544708 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec  3 15:34:22 np0005544708 kernel: cpuidle: using governor menu
Dec  3 15:34:22 np0005544708 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec  3 15:34:22 np0005544708 kernel: PCI: Using configuration type 1 for base access
Dec  3 15:34:22 np0005544708 kernel: PCI: Using configuration type 1 for extended access
Dec  3 15:34:22 np0005544708 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec  3 15:34:22 np0005544708 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec  3 15:34:22 np0005544708 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec  3 15:34:22 np0005544708 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec  3 15:34:22 np0005544708 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec  3 15:34:22 np0005544708 kernel: Demotion targets for Node 0: null
Dec  3 15:34:22 np0005544708 kernel: cryptd: max_cpu_qlen set to 1000
Dec  3 15:34:22 np0005544708 kernel: ACPI: Added _OSI(Module Device)
Dec  3 15:34:22 np0005544708 kernel: ACPI: Added _OSI(Processor Device)
Dec  3 15:34:22 np0005544708 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec  3 15:34:22 np0005544708 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec  3 15:34:22 np0005544708 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec  3 15:34:22 np0005544708 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec  3 15:34:22 np0005544708 kernel: ACPI: Interpreter enabled
Dec  3 15:34:22 np0005544708 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec  3 15:34:22 np0005544708 kernel: ACPI: Using IOAPIC for interrupt routing
Dec  3 15:34:22 np0005544708 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec  3 15:34:22 np0005544708 kernel: PCI: Using E820 reservations for host bridge windows
Dec  3 15:34:22 np0005544708 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec  3 15:34:22 np0005544708 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec  3 15:34:22 np0005544708 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [3] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [4] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [5] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [6] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [7] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [8] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [9] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [10] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [11] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [12] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [13] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [14] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [15] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [16] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [17] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [18] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [19] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [20] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [21] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [22] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [23] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [24] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [25] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [26] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [27] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [28] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [29] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [30] registered
Dec  3 15:34:22 np0005544708 kernel: acpiphp: Slot [31] registered
Dec  3 15:34:22 np0005544708 kernel: PCI host bridge to bus 0000:00
Dec  3 15:34:22 np0005544708 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec  3 15:34:22 np0005544708 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec  3 15:34:22 np0005544708 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec  3 15:34:22 np0005544708 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec  3 15:34:22 np0005544708 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec  3 15:34:22 np0005544708 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec  3 15:34:22 np0005544708 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec  3 15:34:22 np0005544708 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec  3 15:34:22 np0005544708 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec  3 15:34:22 np0005544708 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec  3 15:34:22 np0005544708 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec  3 15:34:22 np0005544708 kernel: iommu: Default domain type: Translated
Dec  3 15:34:22 np0005544708 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec  3 15:34:22 np0005544708 kernel: SCSI subsystem initialized
Dec  3 15:34:22 np0005544708 kernel: ACPI: bus type USB registered
Dec  3 15:34:22 np0005544708 kernel: usbcore: registered new interface driver usbfs
Dec  3 15:34:22 np0005544708 kernel: usbcore: registered new interface driver hub
Dec  3 15:34:22 np0005544708 kernel: usbcore: registered new device driver usb
Dec  3 15:34:22 np0005544708 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec  3 15:34:22 np0005544708 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec  3 15:34:22 np0005544708 kernel: PTP clock support registered
Dec  3 15:34:22 np0005544708 kernel: EDAC MC: Ver: 3.0.0
Dec  3 15:34:22 np0005544708 kernel: NetLabel: Initializing
Dec  3 15:34:22 np0005544708 kernel: NetLabel:  domain hash size = 128
Dec  3 15:34:22 np0005544708 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec  3 15:34:22 np0005544708 kernel: NetLabel:  unlabeled traffic allowed by default
Dec  3 15:34:22 np0005544708 kernel: PCI: Using ACPI for IRQ routing
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec  3 15:34:22 np0005544708 kernel: vgaarb: loaded
Dec  3 15:34:22 np0005544708 kernel: clocksource: Switched to clocksource kvm-clock
Dec  3 15:34:22 np0005544708 kernel: VFS: Disk quotas dquot_6.6.0
Dec  3 15:34:22 np0005544708 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec  3 15:34:22 np0005544708 kernel: pnp: PnP ACPI init
Dec  3 15:34:22 np0005544708 kernel: pnp: PnP ACPI: found 5 devices
Dec  3 15:34:22 np0005544708 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec  3 15:34:22 np0005544708 kernel: NET: Registered PF_INET protocol family
Dec  3 15:34:22 np0005544708 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec  3 15:34:22 np0005544708 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec  3 15:34:22 np0005544708 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec  3 15:34:22 np0005544708 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec  3 15:34:22 np0005544708 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec  3 15:34:22 np0005544708 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec  3 15:34:22 np0005544708 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec  3 15:34:22 np0005544708 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  3 15:34:22 np0005544708 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  3 15:34:22 np0005544708 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec  3 15:34:22 np0005544708 kernel: NET: Registered PF_XDP protocol family
Dec  3 15:34:22 np0005544708 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec  3 15:34:22 np0005544708 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec  3 15:34:22 np0005544708 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec  3 15:34:22 np0005544708 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec  3 15:34:22 np0005544708 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec  3 15:34:22 np0005544708 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec  3 15:34:22 np0005544708 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 71614 usecs
Dec  3 15:34:22 np0005544708 kernel: PCI: CLS 0 bytes, default 64
Dec  3 15:34:22 np0005544708 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec  3 15:34:22 np0005544708 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec  3 15:34:22 np0005544708 kernel: Trying to unpack rootfs image as initramfs...
Dec  3 15:34:22 np0005544708 kernel: ACPI: bus type thunderbolt registered
Dec  3 15:34:22 np0005544708 kernel: Initialise system trusted keyrings
Dec  3 15:34:22 np0005544708 kernel: Key type blacklist registered
Dec  3 15:34:22 np0005544708 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec  3 15:34:22 np0005544708 kernel: zbud: loaded
Dec  3 15:34:22 np0005544708 kernel: integrity: Platform Keyring initialized
Dec  3 15:34:22 np0005544708 kernel: integrity: Machine keyring initialized
Dec  3 15:34:22 np0005544708 kernel: Freeing initrd memory: 87804K
Dec  3 15:34:22 np0005544708 kernel: NET: Registered PF_ALG protocol family
Dec  3 15:34:22 np0005544708 kernel: xor: automatically using best checksumming function   avx       
Dec  3 15:34:22 np0005544708 kernel: Key type asymmetric registered
Dec  3 15:34:22 np0005544708 kernel: Asymmetric key parser 'x509' registered
Dec  3 15:34:22 np0005544708 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec  3 15:34:22 np0005544708 kernel: io scheduler mq-deadline registered
Dec  3 15:34:22 np0005544708 kernel: io scheduler kyber registered
Dec  3 15:34:22 np0005544708 kernel: io scheduler bfq registered
Dec  3 15:34:22 np0005544708 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec  3 15:34:22 np0005544708 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec  3 15:34:22 np0005544708 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec  3 15:34:22 np0005544708 kernel: ACPI: button: Power Button [PWRF]
Dec  3 15:34:22 np0005544708 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec  3 15:34:22 np0005544708 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec  3 15:34:22 np0005544708 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec  3 15:34:22 np0005544708 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec  3 15:34:22 np0005544708 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec  3 15:34:22 np0005544708 kernel: Non-volatile memory driver v1.3
Dec  3 15:34:22 np0005544708 kernel: rdac: device handler registered
Dec  3 15:34:22 np0005544708 kernel: hp_sw: device handler registered
Dec  3 15:34:22 np0005544708 kernel: emc: device handler registered
Dec  3 15:34:22 np0005544708 kernel: alua: device handler registered
Dec  3 15:34:22 np0005544708 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec  3 15:34:22 np0005544708 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec  3 15:34:22 np0005544708 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec  3 15:34:22 np0005544708 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec  3 15:34:22 np0005544708 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec  3 15:34:22 np0005544708 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  3 15:34:22 np0005544708 kernel: usb usb1: Product: UHCI Host Controller
Dec  3 15:34:22 np0005544708 kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec  3 15:34:22 np0005544708 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec  3 15:34:22 np0005544708 kernel: hub 1-0:1.0: USB hub found
Dec  3 15:34:22 np0005544708 kernel: hub 1-0:1.0: 2 ports detected
Dec  3 15:34:22 np0005544708 kernel: usbcore: registered new interface driver usbserial_generic
Dec  3 15:34:22 np0005544708 kernel: usbserial: USB Serial support registered for generic
Dec  3 15:34:22 np0005544708 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec  3 15:34:22 np0005544708 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec  3 15:34:22 np0005544708 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec  3 15:34:22 np0005544708 kernel: mousedev: PS/2 mouse device common for all mice
Dec  3 15:34:22 np0005544708 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec  3 15:34:22 np0005544708 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec  3 15:34:22 np0005544708 kernel: rtc_cmos 00:04: registered as rtc0
Dec  3 15:34:22 np0005544708 kernel: rtc_cmos 00:04: setting system clock to 2025-12-03T20:34:21 UTC (1764794061)
Dec  3 15:34:22 np0005544708 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec  3 15:34:22 np0005544708 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec  3 15:34:22 np0005544708 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec  3 15:34:22 np0005544708 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec  3 15:34:22 np0005544708 kernel: usbcore: registered new interface driver usbhid
Dec  3 15:34:22 np0005544708 kernel: usbhid: USB HID core driver
Dec  3 15:34:22 np0005544708 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec  3 15:34:22 np0005544708 kernel: drop_monitor: Initializing network drop monitor service
Dec  3 15:34:22 np0005544708 kernel: Initializing XFRM netlink socket
Dec  3 15:34:22 np0005544708 kernel: NET: Registered PF_INET6 protocol family
Dec  3 15:34:22 np0005544708 kernel: Segment Routing with IPv6
Dec  3 15:34:22 np0005544708 kernel: NET: Registered PF_PACKET protocol family
Dec  3 15:34:22 np0005544708 kernel: mpls_gso: MPLS GSO support
Dec  3 15:34:22 np0005544708 kernel: IPI shorthand broadcast: enabled
Dec  3 15:34:22 np0005544708 kernel: AVX2 version of gcm_enc/dec engaged.
Dec  3 15:34:22 np0005544708 kernel: AES CTR mode by8 optimization enabled
Dec  3 15:34:22 np0005544708 kernel: sched_clock: Marking stable (1169006450, 151021234)->(1458226480, -138198796)
Dec  3 15:34:22 np0005544708 kernel: registered taskstats version 1
Dec  3 15:34:22 np0005544708 kernel: Loading compiled-in X.509 certificates
Dec  3 15:34:22 np0005544708 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  3 15:34:22 np0005544708 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec  3 15:34:22 np0005544708 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec  3 15:34:22 np0005544708 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec  3 15:34:22 np0005544708 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec  3 15:34:22 np0005544708 kernel: Demotion targets for Node 0: null
Dec  3 15:34:22 np0005544708 kernel: page_owner is disabled
Dec  3 15:34:22 np0005544708 kernel: Key type .fscrypt registered
Dec  3 15:34:22 np0005544708 kernel: Key type fscrypt-provisioning registered
Dec  3 15:34:22 np0005544708 kernel: Key type big_key registered
Dec  3 15:34:22 np0005544708 kernel: Key type encrypted registered
Dec  3 15:34:22 np0005544708 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec  3 15:34:22 np0005544708 kernel: Loading compiled-in module X.509 certificates
Dec  3 15:34:22 np0005544708 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  3 15:34:22 np0005544708 kernel: ima: Allocated hash algorithm: sha256
Dec  3 15:34:22 np0005544708 kernel: ima: No architecture policies found
Dec  3 15:34:22 np0005544708 kernel: evm: Initialising EVM extended attributes:
Dec  3 15:34:22 np0005544708 kernel: evm: security.selinux
Dec  3 15:34:22 np0005544708 kernel: evm: security.SMACK64 (disabled)
Dec  3 15:34:22 np0005544708 kernel: evm: security.SMACK64EXEC (disabled)
Dec  3 15:34:22 np0005544708 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec  3 15:34:22 np0005544708 kernel: evm: security.SMACK64MMAP (disabled)
Dec  3 15:34:22 np0005544708 kernel: evm: security.apparmor (disabled)
Dec  3 15:34:22 np0005544708 kernel: evm: security.ima
Dec  3 15:34:22 np0005544708 kernel: evm: security.capability
Dec  3 15:34:22 np0005544708 kernel: evm: HMAC attrs: 0x1
Dec  3 15:34:22 np0005544708 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec  3 15:34:22 np0005544708 kernel: Running certificate verification RSA selftest
Dec  3 15:34:22 np0005544708 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec  3 15:34:22 np0005544708 kernel: Running certificate verification ECDSA selftest
Dec  3 15:34:22 np0005544708 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec  3 15:34:22 np0005544708 kernel: clk: Disabling unused clocks
Dec  3 15:34:22 np0005544708 kernel: Freeing unused decrypted memory: 2028K
Dec  3 15:34:22 np0005544708 kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec  3 15:34:22 np0005544708 kernel: Write protecting the kernel read-only data: 30720k
Dec  3 15:34:22 np0005544708 kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec  3 15:34:22 np0005544708 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec  3 15:34:22 np0005544708 kernel: Run /init as init process
Dec  3 15:34:22 np0005544708 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  3 15:34:22 np0005544708 systemd: Detected virtualization kvm.
Dec  3 15:34:22 np0005544708 systemd: Detected architecture x86-64.
Dec  3 15:34:22 np0005544708 systemd: Running in initrd.
Dec  3 15:34:22 np0005544708 systemd: No hostname configured, using default hostname.
Dec  3 15:34:22 np0005544708 systemd: Hostname set to <localhost>.
Dec  3 15:34:22 np0005544708 systemd: Initializing machine ID from VM UUID.
Dec  3 15:34:22 np0005544708 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec  3 15:34:22 np0005544708 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec  3 15:34:22 np0005544708 kernel: usb 1-1: Product: QEMU USB Tablet
Dec  3 15:34:22 np0005544708 kernel: usb 1-1: Manufacturer: QEMU
Dec  3 15:34:22 np0005544708 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec  3 15:34:22 np0005544708 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec  3 15:34:22 np0005544708 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec  3 15:34:22 np0005544708 systemd: Queued start job for default target Initrd Default Target.
Dec  3 15:34:22 np0005544708 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  3 15:34:22 np0005544708 systemd: Reached target Local Encrypted Volumes.
Dec  3 15:34:22 np0005544708 systemd: Reached target Initrd /usr File System.
Dec  3 15:34:22 np0005544708 systemd: Reached target Local File Systems.
Dec  3 15:34:22 np0005544708 systemd: Reached target Path Units.
Dec  3 15:34:22 np0005544708 systemd: Reached target Slice Units.
Dec  3 15:34:22 np0005544708 systemd: Reached target Swaps.
Dec  3 15:34:22 np0005544708 systemd: Reached target Timer Units.
Dec  3 15:34:22 np0005544708 systemd: Listening on D-Bus System Message Bus Socket.
Dec  3 15:34:22 np0005544708 systemd: Listening on Journal Socket (/dev/log).
Dec  3 15:34:22 np0005544708 systemd: Listening on Journal Socket.
Dec  3 15:34:22 np0005544708 systemd: Listening on udev Control Socket.
Dec  3 15:34:22 np0005544708 systemd: Listening on udev Kernel Socket.
Dec  3 15:34:22 np0005544708 systemd: Reached target Socket Units.
Dec  3 15:34:22 np0005544708 systemd: Starting Create List of Static Device Nodes...
Dec  3 15:34:22 np0005544708 systemd: Starting Journal Service...
Dec  3 15:34:22 np0005544708 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  3 15:34:22 np0005544708 systemd: Starting Apply Kernel Variables...
Dec  3 15:34:22 np0005544708 systemd: Starting Create System Users...
Dec  3 15:34:22 np0005544708 systemd: Starting Setup Virtual Console...
Dec  3 15:34:22 np0005544708 systemd: Finished Create List of Static Device Nodes.
Dec  3 15:34:22 np0005544708 systemd: Finished Apply Kernel Variables.
Dec  3 15:34:22 np0005544708 systemd: Finished Create System Users.
Dec  3 15:34:22 np0005544708 systemd-journald[307]: Journal started
Dec  3 15:34:22 np0005544708 systemd-journald[307]: Runtime Journal (/run/log/journal/fe8087480a274a3c9875a9777da5fa17) is 8.0M, max 153.6M, 145.6M free.
Dec  3 15:34:22 np0005544708 systemd-sysusers[311]: Creating group 'users' with GID 100.
Dec  3 15:34:22 np0005544708 systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Dec  3 15:34:22 np0005544708 systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec  3 15:34:22 np0005544708 systemd: Started Journal Service.
Dec  3 15:34:22 np0005544708 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  3 15:34:22 np0005544708 systemd[1]: Starting Create Volatile Files and Directories...
Dec  3 15:34:22 np0005544708 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  3 15:34:22 np0005544708 systemd[1]: Finished Create Volatile Files and Directories.
Dec  3 15:34:22 np0005544708 systemd[1]: Finished Setup Virtual Console.
Dec  3 15:34:22 np0005544708 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec  3 15:34:22 np0005544708 systemd[1]: Starting dracut cmdline hook...
Dec  3 15:34:22 np0005544708 dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Dec  3 15:34:22 np0005544708 dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  3 15:34:22 np0005544708 systemd[1]: Finished dracut cmdline hook.
Dec  3 15:34:22 np0005544708 systemd[1]: Starting dracut pre-udev hook...
Dec  3 15:34:22 np0005544708 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec  3 15:34:22 np0005544708 kernel: device-mapper: uevent: version 1.0.3
Dec  3 15:34:22 np0005544708 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec  3 15:34:22 np0005544708 kernel: RPC: Registered named UNIX socket transport module.
Dec  3 15:34:22 np0005544708 kernel: RPC: Registered udp transport module.
Dec  3 15:34:22 np0005544708 kernel: RPC: Registered tcp transport module.
Dec  3 15:34:22 np0005544708 kernel: RPC: Registered tcp-with-tls transport module.
Dec  3 15:34:22 np0005544708 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec  3 15:34:22 np0005544708 rpc.statd[443]: Version 2.5.4 starting
Dec  3 15:34:22 np0005544708 rpc.statd[443]: Initializing NSM state
Dec  3 15:34:22 np0005544708 rpc.idmapd[448]: Setting log level to 0
Dec  3 15:34:22 np0005544708 systemd[1]: Finished dracut pre-udev hook.
Dec  3 15:34:22 np0005544708 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  3 15:34:22 np0005544708 systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Dec  3 15:34:22 np0005544708 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  3 15:34:22 np0005544708 systemd[1]: Starting dracut pre-trigger hook...
Dec  3 15:34:22 np0005544708 systemd[1]: Finished dracut pre-trigger hook.
Dec  3 15:34:22 np0005544708 systemd[1]: Starting Coldplug All udev Devices...
Dec  3 15:34:22 np0005544708 systemd[1]: Created slice Slice /system/modprobe.
Dec  3 15:34:22 np0005544708 systemd[1]: Starting Load Kernel Module configfs...
Dec  3 15:34:22 np0005544708 systemd[1]: Finished Coldplug All udev Devices.
Dec  3 15:34:22 np0005544708 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  3 15:34:22 np0005544708 systemd[1]: Finished Load Kernel Module configfs.
Dec  3 15:34:22 np0005544708 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  3 15:34:22 np0005544708 systemd[1]: Reached target Network.
Dec  3 15:34:22 np0005544708 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  3 15:34:22 np0005544708 systemd[1]: Starting dracut initqueue hook...
Dec  3 15:34:22 np0005544708 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec  3 15:34:22 np0005544708 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec  3 15:34:22 np0005544708 kernel: vda: vda1
Dec  3 15:34:22 np0005544708 systemd-udevd[492]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 15:34:22 np0005544708 kernel: scsi host0: ata_piix
Dec  3 15:34:22 np0005544708 kernel: scsi host1: ata_piix
Dec  3 15:34:22 np0005544708 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec  3 15:34:22 np0005544708 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec  3 15:34:22 np0005544708 systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  3 15:34:23 np0005544708 systemd[1]: Reached target Initrd Root Device.
Dec  3 15:34:23 np0005544708 systemd[1]: Mounting Kernel Configuration File System...
Dec  3 15:34:23 np0005544708 kernel: ata1: found unknown device (class 0)
Dec  3 15:34:23 np0005544708 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec  3 15:34:23 np0005544708 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec  3 15:34:23 np0005544708 systemd[1]: Mounted Kernel Configuration File System.
Dec  3 15:34:23 np0005544708 systemd[1]: Reached target System Initialization.
Dec  3 15:34:23 np0005544708 systemd[1]: Reached target Basic System.
Dec  3 15:34:23 np0005544708 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec  3 15:34:23 np0005544708 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec  3 15:34:23 np0005544708 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec  3 15:34:23 np0005544708 systemd[1]: Finished dracut initqueue hook.
Dec  3 15:34:23 np0005544708 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  3 15:34:23 np0005544708 systemd[1]: Reached target Remote Encrypted Volumes.
Dec  3 15:34:23 np0005544708 systemd[1]: Reached target Remote File Systems.
Dec  3 15:34:23 np0005544708 systemd[1]: Starting dracut pre-mount hook...
Dec  3 15:34:23 np0005544708 systemd[1]: Finished dracut pre-mount hook.
Dec  3 15:34:23 np0005544708 systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec  3 15:34:23 np0005544708 systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Dec  3 15:34:23 np0005544708 systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  3 15:34:23 np0005544708 systemd[1]: Mounting /sysroot...
Dec  3 15:34:23 np0005544708 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec  3 15:34:23 np0005544708 kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec  3 15:34:23 np0005544708 kernel: XFS (vda1): Ending clean mount
Dec  3 15:34:23 np0005544708 systemd[1]: Mounted /sysroot.
Dec  3 15:34:23 np0005544708 systemd[1]: Reached target Initrd Root File System.
Dec  3 15:34:23 np0005544708 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec  3 15:34:23 np0005544708 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec  3 15:34:23 np0005544708 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec  3 15:34:23 np0005544708 systemd[1]: Reached target Initrd File Systems.
Dec  3 15:34:23 np0005544708 systemd[1]: Reached target Initrd Default Target.
Dec  3 15:34:23 np0005544708 systemd[1]: Starting dracut mount hook...
Dec  3 15:34:23 np0005544708 systemd[1]: Finished dracut mount hook.
Dec  3 15:34:23 np0005544708 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec  3 15:34:24 np0005544708 rpc.idmapd[448]: exiting on signal 15
Dec  3 15:34:24 np0005544708 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec  3 15:34:24 np0005544708 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped target Network.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped target Timer Units.
Dec  3 15:34:24 np0005544708 systemd[1]: dbus.socket: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec  3 15:34:24 np0005544708 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped target Initrd Default Target.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped target Basic System.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped target Initrd Root Device.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped target Initrd /usr File System.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped target Path Units.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped target Remote File Systems.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped target Slice Units.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped target Socket Units.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped target System Initialization.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped target Local File Systems.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped target Swaps.
Dec  3 15:34:24 np0005544708 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped dracut mount hook.
Dec  3 15:34:24 np0005544708 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped dracut pre-mount hook.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped target Local Encrypted Volumes.
Dec  3 15:34:24 np0005544708 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec  3 15:34:24 np0005544708 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped dracut initqueue hook.
Dec  3 15:34:24 np0005544708 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped Apply Kernel Variables.
Dec  3 15:34:24 np0005544708 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped Create Volatile Files and Directories.
Dec  3 15:34:24 np0005544708 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped Coldplug All udev Devices.
Dec  3 15:34:24 np0005544708 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped dracut pre-trigger hook.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec  3 15:34:24 np0005544708 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped Setup Virtual Console.
Dec  3 15:34:24 np0005544708 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec  3 15:34:24 np0005544708 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec  3 15:34:24 np0005544708 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Closed udev Control Socket.
Dec  3 15:34:24 np0005544708 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Closed udev Kernel Socket.
Dec  3 15:34:24 np0005544708 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped dracut pre-udev hook.
Dec  3 15:34:24 np0005544708 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped dracut cmdline hook.
Dec  3 15:34:24 np0005544708 systemd[1]: Starting Cleanup udev Database...
Dec  3 15:34:24 np0005544708 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec  3 15:34:24 np0005544708 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped Create List of Static Device Nodes.
Dec  3 15:34:24 np0005544708 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Stopped Create System Users.
Dec  3 15:34:24 np0005544708 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Finished Cleanup udev Database.
Dec  3 15:34:24 np0005544708 systemd[1]: Reached target Switch Root.
Dec  3 15:34:24 np0005544708 systemd[1]: Starting Switch Root...
Dec  3 15:34:24 np0005544708 systemd[1]: Switching root.
Dec  3 15:34:24 np0005544708 systemd-journald[307]: Journal stopped
Dec  3 15:34:24 np0005544708 systemd-journald: Received SIGTERM from PID 1 (systemd).
Dec  3 15:34:24 np0005544708 kernel: audit: type=1404 audit(1764794064.309:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec  3 15:34:24 np0005544708 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 15:34:24 np0005544708 kernel: SELinux:  policy capability open_perms=1
Dec  3 15:34:24 np0005544708 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 15:34:24 np0005544708 kernel: SELinux:  policy capability always_check_network=0
Dec  3 15:34:24 np0005544708 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 15:34:24 np0005544708 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 15:34:24 np0005544708 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 15:34:24 np0005544708 kernel: audit: type=1403 audit(1764794064.443:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec  3 15:34:24 np0005544708 systemd: Successfully loaded SELinux policy in 138.089ms.
Dec  3 15:34:24 np0005544708 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.006ms.
Dec  3 15:34:24 np0005544708 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  3 15:34:24 np0005544708 systemd: Detected virtualization kvm.
Dec  3 15:34:24 np0005544708 systemd: Detected architecture x86-64.
Dec  3 15:34:24 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 15:34:24 np0005544708 systemd: initrd-switch-root.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd: Stopped Switch Root.
Dec  3 15:34:24 np0005544708 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec  3 15:34:24 np0005544708 systemd: Created slice Slice /system/getty.
Dec  3 15:34:24 np0005544708 systemd: Created slice Slice /system/serial-getty.
Dec  3 15:34:24 np0005544708 systemd: Created slice Slice /system/sshd-keygen.
Dec  3 15:34:24 np0005544708 systemd: Created slice User and Session Slice.
Dec  3 15:34:24 np0005544708 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  3 15:34:24 np0005544708 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec  3 15:34:24 np0005544708 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec  3 15:34:24 np0005544708 systemd: Reached target Local Encrypted Volumes.
Dec  3 15:34:24 np0005544708 systemd: Stopped target Switch Root.
Dec  3 15:34:24 np0005544708 systemd: Stopped target Initrd File Systems.
Dec  3 15:34:24 np0005544708 systemd: Stopped target Initrd Root File System.
Dec  3 15:34:24 np0005544708 systemd: Reached target Local Integrity Protected Volumes.
Dec  3 15:34:24 np0005544708 systemd: Reached target Path Units.
Dec  3 15:34:24 np0005544708 systemd: Reached target rpc_pipefs.target.
Dec  3 15:34:24 np0005544708 systemd: Reached target Slice Units.
Dec  3 15:34:24 np0005544708 systemd: Reached target Swaps.
Dec  3 15:34:24 np0005544708 systemd: Reached target Local Verity Protected Volumes.
Dec  3 15:34:24 np0005544708 systemd: Listening on RPCbind Server Activation Socket.
Dec  3 15:34:24 np0005544708 systemd: Reached target RPC Port Mapper.
Dec  3 15:34:24 np0005544708 systemd: Listening on Process Core Dump Socket.
Dec  3 15:34:24 np0005544708 systemd: Listening on initctl Compatibility Named Pipe.
Dec  3 15:34:24 np0005544708 systemd: Listening on udev Control Socket.
Dec  3 15:34:24 np0005544708 systemd: Listening on udev Kernel Socket.
Dec  3 15:34:24 np0005544708 systemd: Mounting Huge Pages File System...
Dec  3 15:34:24 np0005544708 systemd: Mounting POSIX Message Queue File System...
Dec  3 15:34:24 np0005544708 systemd: Mounting Kernel Debug File System...
Dec  3 15:34:24 np0005544708 systemd: Mounting Kernel Trace File System...
Dec  3 15:34:24 np0005544708 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  3 15:34:24 np0005544708 systemd: Starting Create List of Static Device Nodes...
Dec  3 15:34:24 np0005544708 systemd: Starting Load Kernel Module configfs...
Dec  3 15:34:24 np0005544708 systemd: Starting Load Kernel Module drm...
Dec  3 15:34:24 np0005544708 systemd: Starting Load Kernel Module efi_pstore...
Dec  3 15:34:24 np0005544708 systemd: Starting Load Kernel Module fuse...
Dec  3 15:34:24 np0005544708 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec  3 15:34:24 np0005544708 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd: Stopped File System Check on Root Device.
Dec  3 15:34:24 np0005544708 systemd: Stopped Journal Service.
Dec  3 15:34:24 np0005544708 kernel: fuse: init (API version 7.37)
Dec  3 15:34:24 np0005544708 systemd: Starting Journal Service...
Dec  3 15:34:24 np0005544708 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  3 15:34:24 np0005544708 systemd: Starting Generate network units from Kernel command line...
Dec  3 15:34:24 np0005544708 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  3 15:34:24 np0005544708 systemd: Starting Remount Root and Kernel File Systems...
Dec  3 15:34:24 np0005544708 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec  3 15:34:24 np0005544708 systemd: Starting Apply Kernel Variables...
Dec  3 15:34:24 np0005544708 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec  3 15:34:24 np0005544708 systemd: Starting Coldplug All udev Devices...
Dec  3 15:34:24 np0005544708 systemd-journald[684]: Journal started
Dec  3 15:34:24 np0005544708 systemd-journald[684]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  3 15:34:24 np0005544708 systemd[1]: Queued start job for default target Multi-User System.
Dec  3 15:34:24 np0005544708 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd: Started Journal Service.
Dec  3 15:34:24 np0005544708 systemd[1]: Mounted Huge Pages File System.
Dec  3 15:34:24 np0005544708 systemd[1]: Mounted POSIX Message Queue File System.
Dec  3 15:34:24 np0005544708 systemd[1]: Mounted Kernel Debug File System.
Dec  3 15:34:24 np0005544708 systemd[1]: Mounted Kernel Trace File System.
Dec  3 15:34:24 np0005544708 kernel: ACPI: bus type drm_connector registered
Dec  3 15:34:24 np0005544708 systemd[1]: Finished Create List of Static Device Nodes.
Dec  3 15:34:24 np0005544708 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Finished Load Kernel Module configfs.
Dec  3 15:34:24 np0005544708 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Finished Load Kernel Module drm.
Dec  3 15:34:24 np0005544708 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec  3 15:34:24 np0005544708 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec  3 15:34:24 np0005544708 systemd[1]: Finished Load Kernel Module fuse.
Dec  3 15:34:24 np0005544708 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec  3 15:34:24 np0005544708 systemd[1]: Finished Generate network units from Kernel command line.
Dec  3 15:34:24 np0005544708 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec  3 15:34:24 np0005544708 systemd[1]: Finished Apply Kernel Variables.
Dec  3 15:34:25 np0005544708 systemd[1]: Mounting FUSE Control File System...
Dec  3 15:34:25 np0005544708 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  3 15:34:25 np0005544708 systemd[1]: Starting Rebuild Hardware Database...
Dec  3 15:34:25 np0005544708 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec  3 15:34:25 np0005544708 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec  3 15:34:25 np0005544708 systemd[1]: Starting Load/Save OS Random Seed...
Dec  3 15:34:25 np0005544708 systemd[1]: Starting Create System Users...
Dec  3 15:34:25 np0005544708 systemd[1]: Mounted FUSE Control File System.
Dec  3 15:34:25 np0005544708 systemd-journald[684]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  3 15:34:25 np0005544708 systemd-journald[684]: Received client request to flush runtime journal.
Dec  3 15:34:25 np0005544708 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec  3 15:34:25 np0005544708 systemd[1]: Finished Load/Save OS Random Seed.
Dec  3 15:34:25 np0005544708 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  3 15:34:25 np0005544708 systemd[1]: Finished Coldplug All udev Devices.
Dec  3 15:34:25 np0005544708 systemd[1]: Finished Create System Users.
Dec  3 15:34:25 np0005544708 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  3 15:34:25 np0005544708 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  3 15:34:25 np0005544708 systemd[1]: Reached target Preparation for Local File Systems.
Dec  3 15:34:25 np0005544708 systemd[1]: Reached target Local File Systems.
Dec  3 15:34:25 np0005544708 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec  3 15:34:25 np0005544708 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec  3 15:34:25 np0005544708 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec  3 15:34:25 np0005544708 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec  3 15:34:25 np0005544708 systemd[1]: Starting Automatic Boot Loader Update...
Dec  3 15:34:25 np0005544708 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec  3 15:34:25 np0005544708 systemd[1]: Starting Create Volatile Files and Directories...
Dec  3 15:34:25 np0005544708 bootctl[700]: Couldn't find EFI system partition, skipping.
Dec  3 15:34:25 np0005544708 systemd[1]: Finished Automatic Boot Loader Update.
Dec  3 15:34:25 np0005544708 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec  3 15:34:25 np0005544708 systemd[1]: Finished Create Volatile Files and Directories.
Dec  3 15:34:25 np0005544708 systemd[1]: Starting Security Auditing Service...
Dec  3 15:34:25 np0005544708 systemd[1]: Starting RPC Bind...
Dec  3 15:34:25 np0005544708 systemd[1]: Starting Rebuild Journal Catalog...
Dec  3 15:34:25 np0005544708 auditd[706]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec  3 15:34:25 np0005544708 auditd[706]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec  3 15:34:25 np0005544708 systemd[1]: Finished Rebuild Journal Catalog.
Dec  3 15:34:25 np0005544708 systemd[1]: Started RPC Bind.
Dec  3 15:34:25 np0005544708 augenrules[711]: /sbin/augenrules: No change
Dec  3 15:34:25 np0005544708 augenrules[726]: No rules
Dec  3 15:34:25 np0005544708 augenrules[726]: enabled 1
Dec  3 15:34:25 np0005544708 augenrules[726]: failure 1
Dec  3 15:34:25 np0005544708 augenrules[726]: pid 706
Dec  3 15:34:25 np0005544708 augenrules[726]: rate_limit 0
Dec  3 15:34:25 np0005544708 augenrules[726]: backlog_limit 8192
Dec  3 15:34:25 np0005544708 augenrules[726]: lost 0
Dec  3 15:34:25 np0005544708 augenrules[726]: backlog 3
Dec  3 15:34:25 np0005544708 augenrules[726]: backlog_wait_time 60000
Dec  3 15:34:25 np0005544708 augenrules[726]: backlog_wait_time_actual 0
Dec  3 15:34:25 np0005544708 augenrules[726]: enabled 1
Dec  3 15:34:25 np0005544708 augenrules[726]: failure 1
Dec  3 15:34:25 np0005544708 augenrules[726]: pid 706
Dec  3 15:34:25 np0005544708 augenrules[726]: rate_limit 0
Dec  3 15:34:25 np0005544708 augenrules[726]: backlog_limit 8192
Dec  3 15:34:25 np0005544708 augenrules[726]: lost 0
Dec  3 15:34:25 np0005544708 augenrules[726]: backlog 0
Dec  3 15:34:25 np0005544708 augenrules[726]: backlog_wait_time 60000
Dec  3 15:34:25 np0005544708 augenrules[726]: backlog_wait_time_actual 0
Dec  3 15:34:25 np0005544708 augenrules[726]: enabled 1
Dec  3 15:34:25 np0005544708 augenrules[726]: failure 1
Dec  3 15:34:25 np0005544708 augenrules[726]: pid 706
Dec  3 15:34:25 np0005544708 augenrules[726]: rate_limit 0
Dec  3 15:34:25 np0005544708 augenrules[726]: backlog_limit 8192
Dec  3 15:34:25 np0005544708 augenrules[726]: lost 0
Dec  3 15:34:25 np0005544708 augenrules[726]: backlog 0
Dec  3 15:34:25 np0005544708 augenrules[726]: backlog_wait_time 60000
Dec  3 15:34:25 np0005544708 augenrules[726]: backlog_wait_time_actual 0
Dec  3 15:34:25 np0005544708 systemd[1]: Started Security Auditing Service.
Dec  3 15:34:25 np0005544708 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec  3 15:34:25 np0005544708 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec  3 15:34:25 np0005544708 systemd[1]: Finished Rebuild Hardware Database.
Dec  3 15:34:25 np0005544708 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  3 15:34:25 np0005544708 systemd[1]: Starting Update is Completed...
Dec  3 15:34:25 np0005544708 systemd[1]: Finished Update is Completed.
Dec  3 15:34:25 np0005544708 systemd-udevd[734]: Using default interface naming scheme 'rhel-9.0'.
Dec  3 15:34:25 np0005544708 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  3 15:34:25 np0005544708 systemd[1]: Reached target System Initialization.
Dec  3 15:34:25 np0005544708 systemd[1]: Started dnf makecache --timer.
Dec  3 15:34:25 np0005544708 systemd[1]: Started Daily rotation of log files.
Dec  3 15:34:25 np0005544708 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec  3 15:34:25 np0005544708 systemd[1]: Reached target Timer Units.
Dec  3 15:34:25 np0005544708 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec  3 15:34:25 np0005544708 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec  3 15:34:25 np0005544708 systemd[1]: Reached target Socket Units.
Dec  3 15:34:25 np0005544708 systemd[1]: Starting D-Bus System Message Bus...
Dec  3 15:34:25 np0005544708 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  3 15:34:25 np0005544708 systemd[1]: Starting Load Kernel Module configfs...
Dec  3 15:34:25 np0005544708 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  3 15:34:25 np0005544708 systemd[1]: Finished Load Kernel Module configfs.
Dec  3 15:34:25 np0005544708 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec  3 15:34:25 np0005544708 systemd-udevd[748]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 15:34:25 np0005544708 systemd[1]: Started D-Bus System Message Bus.
Dec  3 15:34:25 np0005544708 systemd[1]: Reached target Basic System.
Dec  3 15:34:25 np0005544708 dbus-broker-lau[743]: Ready
Dec  3 15:34:25 np0005544708 systemd[1]: Starting NTP client/server...
Dec  3 15:34:25 np0005544708 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec  3 15:34:25 np0005544708 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec  3 15:34:25 np0005544708 systemd[1]: Starting IPv4 firewall with iptables...
Dec  3 15:34:25 np0005544708 systemd[1]: Started irqbalance daemon.
Dec  3 15:34:25 np0005544708 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec  3 15:34:25 np0005544708 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  3 15:34:25 np0005544708 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  3 15:34:25 np0005544708 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  3 15:34:25 np0005544708 systemd[1]: Reached target sshd-keygen.target.
Dec  3 15:34:25 np0005544708 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec  3 15:34:25 np0005544708 systemd[1]: Reached target User and Group Name Lookups.
Dec  3 15:34:25 np0005544708 systemd[1]: Starting User Login Management...
Dec  3 15:34:25 np0005544708 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec  3 15:34:25 np0005544708 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec  3 15:34:25 np0005544708 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec  3 15:34:25 np0005544708 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec  3 15:34:25 np0005544708 chronyd[798]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  3 15:34:25 np0005544708 systemd[1]: Started NTP client/server.
Dec  3 15:34:25 np0005544708 chronyd[798]: Loaded 0 symmetric keys
Dec  3 15:34:25 np0005544708 chronyd[798]: Using right/UTC timezone to obtain leap second data
Dec  3 15:34:25 np0005544708 chronyd[798]: Loaded seccomp filter (level 2)
Dec  3 15:34:25 np0005544708 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec  3 15:34:25 np0005544708 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec  3 15:34:25 np0005544708 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec  3 15:34:25 np0005544708 systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  3 15:34:25 np0005544708 systemd-logind[787]: New seat seat0.
Dec  3 15:34:25 np0005544708 systemd[1]: Started User Login Management.
Dec  3 15:34:25 np0005544708 systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  3 15:34:26 np0005544708 kernel: kvm_amd: TSC scaling supported
Dec  3 15:34:26 np0005544708 kernel: kvm_amd: Nested Virtualization enabled
Dec  3 15:34:26 np0005544708 kernel: kvm_amd: Nested Paging enabled
Dec  3 15:34:26 np0005544708 kernel: kvm_amd: LBR virtualization supported
Dec  3 15:34:26 np0005544708 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec  3 15:34:26 np0005544708 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec  3 15:34:26 np0005544708 kernel: Console: switching to colour dummy device 80x25
Dec  3 15:34:26 np0005544708 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec  3 15:34:26 np0005544708 kernel: [drm] features: -context_init
Dec  3 15:34:26 np0005544708 kernel: [drm] number of scanouts: 1
Dec  3 15:34:26 np0005544708 kernel: [drm] number of cap sets: 0
Dec  3 15:34:26 np0005544708 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec  3 15:34:26 np0005544708 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec  3 15:34:26 np0005544708 kernel: Console: switching to colour frame buffer device 128x48
Dec  3 15:34:26 np0005544708 iptables.init[781]: iptables: Applying firewall rules: [  OK  ]
Dec  3 15:34:26 np0005544708 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec  3 15:34:26 np0005544708 systemd[1]: Finished IPv4 firewall with iptables.
Dec  3 15:34:26 np0005544708 cloud-init[842]: Cloud-init v. 24.4-7.el9 running 'init-local' at Wed, 03 Dec 2025 20:34:26 +0000. Up 5.89 seconds.
Dec  3 15:34:26 np0005544708 systemd[1]: run-cloud\x2dinit-tmp-tmp7wgqn6ps.mount: Deactivated successfully.
Dec  3 15:34:26 np0005544708 systemd[1]: Starting Hostname Service...
Dec  3 15:34:26 np0005544708 systemd[1]: Started Hostname Service.
Dec  3 15:34:26 np0005544708 systemd-hostnamed[856]: Hostname set to <np0005544708.novalocal> (static)
Dec  3 15:34:26 np0005544708 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec  3 15:34:26 np0005544708 systemd[1]: Reached target Preparation for Network.
Dec  3 15:34:26 np0005544708 systemd[1]: Starting Network Manager...
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.8691] NetworkManager (version 1.54.1-1.el9) is starting... (boot:cb12512a-3aa8-4735-9c82-f409c246c155)
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.8697] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.8778] manager[0x5556d1397080]: monitoring kernel firmware directory '/lib/firmware'.
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.8812] hostname: hostname: using hostnamed
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.8812] hostname: static hostname changed from (none) to "np0005544708.novalocal"
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.8817] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.8913] manager[0x5556d1397080]: rfkill: Wi-Fi hardware radio set enabled
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.8914] manager[0x5556d1397080]: rfkill: WWAN hardware radio set enabled
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.8956] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  3 15:34:26 np0005544708 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.8957] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.8959] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.8960] manager: Networking is enabled by state file
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.8962] settings: Loaded settings plugin: keyfile (internal)
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.8976] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.8996] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9011] dhcp: init: Using DHCP client 'internal'
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9014] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9031] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9040] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9049] device (lo): Activation: starting connection 'lo' (2dc71c9a-e258-42ef-b117-38009802277f)
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9059] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9063] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9089] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9095] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9099] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9102] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9104] device (eth0): carrier: link connected
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9109] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9116] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  3 15:34:26 np0005544708 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9123] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9128] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9129] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9132] manager: NetworkManager state is now CONNECTING
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9133] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9144] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9147] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  3 15:34:26 np0005544708 systemd[1]: Started Network Manager.
Dec  3 15:34:26 np0005544708 systemd[1]: Reached target Network.
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9191] dhcp4 (eth0): state changed new lease, address=38.102.83.219
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9200] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9221] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 15:34:26 np0005544708 systemd[1]: Starting Network Manager Wait Online...
Dec  3 15:34:26 np0005544708 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec  3 15:34:26 np0005544708 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9389] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9392] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9393] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9400] device (lo): Activation: successful, device activated.
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9406] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9410] manager: NetworkManager state is now CONNECTED_SITE
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9414] device (eth0): Activation: successful, device activated.
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9419] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  3 15:34:26 np0005544708 NetworkManager[860]: <info>  [1764794066.9422] manager: startup complete
Dec  3 15:34:26 np0005544708 systemd[1]: Started GSSAPI Proxy Daemon.
Dec  3 15:34:26 np0005544708 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  3 15:34:26 np0005544708 systemd[1]: Reached target NFS client services.
Dec  3 15:34:26 np0005544708 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  3 15:34:26 np0005544708 systemd[1]: Reached target Remote File Systems.
Dec  3 15:34:26 np0005544708 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  3 15:34:26 np0005544708 systemd[1]: Finished Network Manager Wait Online.
Dec  3 15:34:26 np0005544708 systemd[1]: Starting Cloud-init: Network Stage...
Dec  3 15:34:27 np0005544708 cloud-init[923]: Cloud-init v. 24.4-7.el9 running 'init' at Wed, 03 Dec 2025 20:34:27 +0000. Up 6.91 seconds.
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: |  eth0  | True |        38.102.83.219         | 255.255.255.0 | global | fa:16:3e:28:69:f5 |
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: |  eth0  | True | fe80::f816:3eff:fe28:69f5/64 |       .       |  link  | fa:16:3e:28:69:f5 |
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec  3 15:34:27 np0005544708 cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  3 15:34:28 np0005544708 cloud-init[923]: Generating public/private rsa key pair.
Dec  3 15:34:28 np0005544708 cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec  3 15:34:28 np0005544708 cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec  3 15:34:28 np0005544708 cloud-init[923]: The key fingerprint is:
Dec  3 15:34:28 np0005544708 cloud-init[923]: SHA256:YAOdlDquc6Fe8SNA1LKtvklICtWqcMSeI1+YY8n33tM root@np0005544708.novalocal
Dec  3 15:34:28 np0005544708 cloud-init[923]: The key's randomart image is:
Dec  3 15:34:28 np0005544708 cloud-init[923]: +---[RSA 3072]----+
Dec  3 15:34:28 np0005544708 cloud-init[923]: |  ...o.o         |
Dec  3 15:34:28 np0005544708 cloud-init[923]: | o....+          |
Dec  3 15:34:28 np0005544708 cloud-init[923]: |  =+..+          |
Dec  3 15:34:28 np0005544708 cloud-init[923]: | *.*+. o         |
Dec  3 15:34:28 np0005544708 cloud-init[923]: |+.#o+.  S        |
Dec  3 15:34:28 np0005544708 cloud-init[923]: |=B.*o+           |
Dec  3 15:34:28 np0005544708 cloud-init[923]: |+.ooo.+  .       |
Dec  3 15:34:28 np0005544708 cloud-init[923]: | .=o.o o. E      |
Dec  3 15:34:28 np0005544708 cloud-init[923]: | .++  . ..       |
Dec  3 15:34:28 np0005544708 cloud-init[923]: +----[SHA256]-----+
Dec  3 15:34:28 np0005544708 cloud-init[923]: Generating public/private ecdsa key pair.
Dec  3 15:34:28 np0005544708 cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec  3 15:34:28 np0005544708 cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec  3 15:34:28 np0005544708 cloud-init[923]: The key fingerprint is:
Dec  3 15:34:28 np0005544708 cloud-init[923]: SHA256:0oqKrqywu4tzYmMxViC3pHm53h4s8MeGMaGkrx8VNiY root@np0005544708.novalocal
Dec  3 15:34:28 np0005544708 cloud-init[923]: The key's randomart image is:
Dec  3 15:34:28 np0005544708 cloud-init[923]: +---[ECDSA 256]---+
Dec  3 15:34:28 np0005544708 cloud-init[923]: |                 |
Dec  3 15:34:28 np0005544708 cloud-init[923]: |..o              |
Dec  3 15:34:28 np0005544708 cloud-init[923]: |.*Eo=            |
Dec  3 15:34:28 np0005544708 cloud-init[923]: |=.+* o .         |
Dec  3 15:34:28 np0005544708 cloud-init[923]: |+.+.. . S        |
Dec  3 15:34:28 np0005544708 cloud-init[923]: | B.B . o         |
Dec  3 15:34:28 np0005544708 cloud-init[923]: |o.O.B .          |
Dec  3 15:34:28 np0005544708 cloud-init[923]: |*Bo*..           |
Dec  3 15:34:28 np0005544708 cloud-init[923]: |^@+..            |
Dec  3 15:34:28 np0005544708 cloud-init[923]: +----[SHA256]-----+
Dec  3 15:34:28 np0005544708 cloud-init[923]: Generating public/private ed25519 key pair.
Dec  3 15:34:28 np0005544708 cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec  3 15:34:28 np0005544708 cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec  3 15:34:28 np0005544708 cloud-init[923]: The key fingerprint is:
Dec  3 15:34:28 np0005544708 cloud-init[923]: SHA256:G3SRB4rAJKwfwBspc1v9P/VfxSmeg7EUWPrf5xVp1sE root@np0005544708.novalocal
Dec  3 15:34:28 np0005544708 cloud-init[923]: The key's randomart image is:
Dec  3 15:34:28 np0005544708 cloud-init[923]: +--[ED25519 256]--+
Dec  3 15:34:28 np0005544708 cloud-init[923]: |..ooo.    =+     |
Dec  3 15:34:28 np0005544708 cloud-init[923]: |++o.o... ooo. .  |
Dec  3 15:34:28 np0005544708 cloud-init[923]: |.=oo  ..o....  E.|
Dec  3 15:34:28 np0005544708 cloud-init[923]: |..o    ....o.. .*|
Dec  3 15:34:28 np0005544708 cloud-init[923]: | . .    S..o=.o=o|
Dec  3 15:34:28 np0005544708 cloud-init[923]: |  .      ooo.+= o|
Dec  3 15:34:28 np0005544708 cloud-init[923]: |        .  . ..o+|
Dec  3 15:34:28 np0005544708 cloud-init[923]: |               .+|
Dec  3 15:34:28 np0005544708 cloud-init[923]: |                .|
Dec  3 15:34:28 np0005544708 cloud-init[923]: +----[SHA256]-----+
Dec  3 15:34:28 np0005544708 systemd[1]: Finished Cloud-init: Network Stage.
Dec  3 15:34:28 np0005544708 systemd[1]: Reached target Cloud-config availability.
Dec  3 15:34:28 np0005544708 systemd[1]: Reached target Network is Online.
Dec  3 15:34:28 np0005544708 systemd[1]: Starting Cloud-init: Config Stage...
Dec  3 15:34:28 np0005544708 systemd[1]: Starting Crash recovery kernel arming...
Dec  3 15:34:28 np0005544708 systemd[1]: Starting Notify NFS peers of a restart...
Dec  3 15:34:28 np0005544708 systemd[1]: Starting System Logging Service...
Dec  3 15:34:28 np0005544708 systemd[1]: Starting OpenSSH server daemon...
Dec  3 15:34:28 np0005544708 sm-notify[1005]: Version 2.5.4 starting
Dec  3 15:34:28 np0005544708 systemd[1]: Starting Permit User Sessions...
Dec  3 15:34:28 np0005544708 systemd[1]: Started Notify NFS peers of a restart.
Dec  3 15:34:28 np0005544708 systemd[1]: Started OpenSSH server daemon.
Dec  3 15:34:28 np0005544708 systemd[1]: Finished Permit User Sessions.
Dec  3 15:34:28 np0005544708 systemd[1]: Started Command Scheduler.
Dec  3 15:34:28 np0005544708 systemd[1]: Started Getty on tty1.
Dec  3 15:34:28 np0005544708 systemd[1]: Started Serial Getty on ttyS0.
Dec  3 15:34:28 np0005544708 systemd[1]: Reached target Login Prompts.
Dec  3 15:34:28 np0005544708 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Dec  3 15:34:28 np0005544708 systemd[1]: Started System Logging Service.
Dec  3 15:34:28 np0005544708 rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec  3 15:34:28 np0005544708 systemd[1]: Reached target Multi-User System.
Dec  3 15:34:28 np0005544708 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec  3 15:34:28 np0005544708 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec  3 15:34:28 np0005544708 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec  3 15:34:28 np0005544708 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 15:34:28 np0005544708 kdumpctl[1021]: kdump: No kdump initial ramdisk found.
Dec  3 15:34:28 np0005544708 kdumpctl[1021]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec  3 15:34:28 np0005544708 cloud-init[1154]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Wed, 03 Dec 2025 20:34:28 +0000. Up 8.41 seconds.
Dec  3 15:34:28 np0005544708 systemd[1]: Finished Cloud-init: Config Stage.
Dec  3 15:34:28 np0005544708 systemd[1]: Starting Cloud-init: Final Stage...
Dec  3 15:34:29 np0005544708 dracut[1285]: dracut-057-102.git20250818.el9
Dec  3 15:34:29 np0005544708 cloud-init[1303]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Wed, 03 Dec 2025 20:34:29 +0000. Up 8.79 seconds.
Dec  3 15:34:29 np0005544708 cloud-init[1310]: #############################################################
Dec  3 15:34:29 np0005544708 cloud-init[1315]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec  3 15:34:29 np0005544708 dracut[1287]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec  3 15:34:29 np0005544708 cloud-init[1324]: 256 SHA256:0oqKrqywu4tzYmMxViC3pHm53h4s8MeGMaGkrx8VNiY root@np0005544708.novalocal (ECDSA)
Dec  3 15:34:29 np0005544708 cloud-init[1329]: 256 SHA256:G3SRB4rAJKwfwBspc1v9P/VfxSmeg7EUWPrf5xVp1sE root@np0005544708.novalocal (ED25519)
Dec  3 15:34:29 np0005544708 cloud-init[1333]: 3072 SHA256:YAOdlDquc6Fe8SNA1LKtvklICtWqcMSeI1+YY8n33tM root@np0005544708.novalocal (RSA)
Dec  3 15:34:29 np0005544708 cloud-init[1336]: -----END SSH HOST KEY FINGERPRINTS-----
Dec  3 15:34:29 np0005544708 cloud-init[1338]: #############################################################
Dec  3 15:34:29 np0005544708 cloud-init[1303]: Cloud-init v. 24.4-7.el9 finished at Wed, 03 Dec 2025 20:34:29 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 8.97 seconds
Dec  3 15:34:29 np0005544708 systemd[1]: Finished Cloud-init: Final Stage.
Dec  3 15:34:29 np0005544708 systemd[1]: Reached target Cloud-init target.
Dec  3 15:34:29 np0005544708 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec  3 15:34:29 np0005544708 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec  3 15:34:29 np0005544708 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec  3 15:34:29 np0005544708 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  3 15:34:29 np0005544708 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  3 15:34:29 np0005544708 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  3 15:34:29 np0005544708 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  3 15:34:29 np0005544708 dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  3 15:34:29 np0005544708 dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  3 15:34:29 np0005544708 dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  3 15:34:29 np0005544708 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  3 15:34:29 np0005544708 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  3 15:34:29 np0005544708 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  3 15:34:29 np0005544708 dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  3 15:34:29 np0005544708 dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  3 15:34:29 np0005544708 dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  3 15:34:29 np0005544708 dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  3 15:34:29 np0005544708 dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  3 15:34:29 np0005544708 dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: memstrack is not available
Dec  3 15:34:30 np0005544708 dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  3 15:34:30 np0005544708 dracut[1287]: memstrack is not available
Dec  3 15:34:30 np0005544708 dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  3 15:34:30 np0005544708 dracut[1287]: *** Including module: systemd ***
Dec  3 15:34:31 np0005544708 dracut[1287]: *** Including module: fips ***
Dec  3 15:34:31 np0005544708 dracut[1287]: *** Including module: systemd-initrd ***
Dec  3 15:34:31 np0005544708 dracut[1287]: *** Including module: i18n ***
Dec  3 15:34:31 np0005544708 dracut[1287]: *** Including module: drm ***
Dec  3 15:34:31 np0005544708 chronyd[798]: Selected source 174.142.148.226 (2.centos.pool.ntp.org)
Dec  3 15:34:31 np0005544708 chronyd[798]: System clock TAI offset set to 37 seconds
Dec  3 15:34:32 np0005544708 dracut[1287]: *** Including module: prefixdevname ***
Dec  3 15:34:32 np0005544708 dracut[1287]: *** Including module: kernel-modules ***
Dec  3 15:34:32 np0005544708 kernel: block vda: the capability attribute has been deprecated.
Dec  3 15:34:32 np0005544708 dracut[1287]: *** Including module: kernel-modules-extra ***
Dec  3 15:34:32 np0005544708 dracut[1287]: *** Including module: qemu ***
Dec  3 15:34:33 np0005544708 dracut[1287]: *** Including module: fstab-sys ***
Dec  3 15:34:33 np0005544708 dracut[1287]: *** Including module: rootfs-block ***
Dec  3 15:34:33 np0005544708 dracut[1287]: *** Including module: terminfo ***
Dec  3 15:34:33 np0005544708 dracut[1287]: *** Including module: udev-rules ***
Dec  3 15:34:33 np0005544708 dracut[1287]: Skipping udev rule: 91-permissions.rules
Dec  3 15:34:33 np0005544708 dracut[1287]: Skipping udev rule: 80-drivers-modprobe.rules
Dec  3 15:34:33 np0005544708 dracut[1287]: *** Including module: virtiofs ***
Dec  3 15:34:33 np0005544708 dracut[1287]: *** Including module: dracut-systemd ***
Dec  3 15:34:34 np0005544708 dracut[1287]: *** Including module: usrmount ***
Dec  3 15:34:34 np0005544708 dracut[1287]: *** Including module: base ***
Dec  3 15:34:34 np0005544708 dracut[1287]: *** Including module: fs-lib ***
Dec  3 15:34:34 np0005544708 dracut[1287]: *** Including module: kdumpbase ***
Dec  3 15:34:34 np0005544708 dracut[1287]: *** Including module: microcode_ctl-fw_dir_override ***
Dec  3 15:34:34 np0005544708 dracut[1287]:  microcode_ctl module: mangling fw_dir
Dec  3 15:34:34 np0005544708 dracut[1287]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec  3 15:34:34 np0005544708 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec  3 15:34:34 np0005544708 dracut[1287]:    microcode_ctl: configuration "intel" is ignored
Dec  3 15:34:34 np0005544708 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec  3 15:34:34 np0005544708 dracut[1287]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec  3 15:34:34 np0005544708 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec  3 15:34:34 np0005544708 dracut[1287]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec  3 15:34:34 np0005544708 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec  3 15:34:34 np0005544708 dracut[1287]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec  3 15:34:34 np0005544708 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec  3 15:34:34 np0005544708 dracut[1287]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec  3 15:34:34 np0005544708 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec  3 15:34:35 np0005544708 dracut[1287]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec  3 15:34:35 np0005544708 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec  3 15:34:35 np0005544708 dracut[1287]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec  3 15:34:35 np0005544708 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec  3 15:34:35 np0005544708 dracut[1287]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec  3 15:34:35 np0005544708 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec  3 15:34:35 np0005544708 dracut[1287]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec  3 15:34:35 np0005544708 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec  3 15:34:35 np0005544708 dracut[1287]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec  3 15:34:35 np0005544708 dracut[1287]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec  3 15:34:35 np0005544708 dracut[1287]: *** Including module: openssl ***
Dec  3 15:34:35 np0005544708 dracut[1287]: *** Including module: shutdown ***
Dec  3 15:34:35 np0005544708 dracut[1287]: *** Including module: squash ***
Dec  3 15:34:35 np0005544708 dracut[1287]: *** Including modules done ***
Dec  3 15:34:35 np0005544708 dracut[1287]: *** Installing kernel module dependencies ***
Dec  3 15:34:36 np0005544708 dracut[1287]: *** Installing kernel module dependencies done ***
Dec  3 15:34:36 np0005544708 dracut[1287]: *** Resolving executable dependencies ***
Dec  3 15:34:36 np0005544708 irqbalance[782]: Cannot change IRQ 35 affinity: Operation not permitted
Dec  3 15:34:36 np0005544708 irqbalance[782]: IRQ 35 affinity is now unmanaged
Dec  3 15:34:36 np0005544708 irqbalance[782]: Cannot change IRQ 25 affinity: Operation not permitted
Dec  3 15:34:36 np0005544708 irqbalance[782]: IRQ 25 affinity is now unmanaged
Dec  3 15:34:36 np0005544708 irqbalance[782]: Cannot change IRQ 33 affinity: Operation not permitted
Dec  3 15:34:36 np0005544708 irqbalance[782]: IRQ 33 affinity is now unmanaged
Dec  3 15:34:36 np0005544708 irqbalance[782]: Cannot change IRQ 31 affinity: Operation not permitted
Dec  3 15:34:36 np0005544708 irqbalance[782]: IRQ 31 affinity is now unmanaged
Dec  3 15:34:36 np0005544708 irqbalance[782]: Cannot change IRQ 34 affinity: Operation not permitted
Dec  3 15:34:36 np0005544708 irqbalance[782]: IRQ 34 affinity is now unmanaged
Dec  3 15:34:36 np0005544708 irqbalance[782]: Cannot change IRQ 32 affinity: Operation not permitted
Dec  3 15:34:36 np0005544708 irqbalance[782]: IRQ 32 affinity is now unmanaged
Dec  3 15:34:36 np0005544708 irqbalance[782]: Cannot change IRQ 30 affinity: Operation not permitted
Dec  3 15:34:36 np0005544708 irqbalance[782]: IRQ 30 affinity is now unmanaged
Dec  3 15:34:36 np0005544708 irqbalance[782]: Cannot change IRQ 29 affinity: Operation not permitted
Dec  3 15:34:36 np0005544708 irqbalance[782]: IRQ 29 affinity is now unmanaged
Dec  3 15:34:37 np0005544708 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  3 15:34:37 np0005544708 dracut[1287]: *** Resolving executable dependencies done ***
Dec  3 15:34:37 np0005544708 dracut[1287]: *** Generating early-microcode cpio image ***
Dec  3 15:34:37 np0005544708 dracut[1287]: *** Store current command line parameters ***
Dec  3 15:34:37 np0005544708 dracut[1287]: Stored kernel commandline:
Dec  3 15:34:37 np0005544708 dracut[1287]: No dracut internal kernel commandline stored in the initramfs
Dec  3 15:34:37 np0005544708 dracut[1287]: *** Install squash loader ***
Dec  3 15:34:38 np0005544708 dracut[1287]: *** Squashing the files inside the initramfs ***
Dec  3 15:34:39 np0005544708 dracut[1287]: *** Squashing the files inside the initramfs done ***
Dec  3 15:34:39 np0005544708 dracut[1287]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec  3 15:34:39 np0005544708 dracut[1287]: *** Hardlinking files ***
Dec  3 15:34:39 np0005544708 dracut[1287]: *** Hardlinking files done ***
Dec  3 15:34:40 np0005544708 dracut[1287]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec  3 15:34:40 np0005544708 kdumpctl[1021]: kdump: kexec: loaded kdump kernel
Dec  3 15:34:40 np0005544708 kdumpctl[1021]: kdump: Starting kdump: [OK]
Dec  3 15:34:40 np0005544708 systemd[1]: Finished Crash recovery kernel arming.
Dec  3 15:34:40 np0005544708 systemd[1]: Startup finished in 1.513s (kernel) + 2.426s (initrd) + 16.552s (userspace) = 20.491s.
Dec  3 15:34:43 np0005544708 systemd[1]: Created slice User Slice of UID 1000.
Dec  3 15:34:43 np0005544708 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec  3 15:34:43 np0005544708 systemd-logind[787]: New session 1 of user zuul.
Dec  3 15:34:43 np0005544708 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec  3 15:34:43 np0005544708 systemd[1]: Starting User Manager for UID 1000...
Dec  3 15:34:43 np0005544708 systemd[4301]: Queued start job for default target Main User Target.
Dec  3 15:34:43 np0005544708 systemd[4301]: Created slice User Application Slice.
Dec  3 15:34:43 np0005544708 systemd[4301]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  3 15:34:43 np0005544708 systemd[4301]: Started Daily Cleanup of User's Temporary Directories.
Dec  3 15:34:43 np0005544708 systemd[4301]: Reached target Paths.
Dec  3 15:34:43 np0005544708 systemd[4301]: Reached target Timers.
Dec  3 15:34:43 np0005544708 systemd[4301]: Starting D-Bus User Message Bus Socket...
Dec  3 15:34:43 np0005544708 systemd[4301]: Starting Create User's Volatile Files and Directories...
Dec  3 15:34:43 np0005544708 systemd[4301]: Finished Create User's Volatile Files and Directories.
Dec  3 15:34:43 np0005544708 systemd[4301]: Listening on D-Bus User Message Bus Socket.
Dec  3 15:34:43 np0005544708 systemd[4301]: Reached target Sockets.
Dec  3 15:34:43 np0005544708 systemd[4301]: Reached target Basic System.
Dec  3 15:34:43 np0005544708 systemd[4301]: Reached target Main User Target.
Dec  3 15:34:43 np0005544708 systemd[4301]: Startup finished in 107ms.
Dec  3 15:34:43 np0005544708 systemd[1]: Started User Manager for UID 1000.
Dec  3 15:34:43 np0005544708 systemd[1]: Started Session 1 of User zuul.
Dec  3 15:34:43 np0005544708 python3[4383]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 15:34:46 np0005544708 python3[4411]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 15:34:52 np0005544708 python3[4469]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 15:34:52 np0005544708 python3[4509]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec  3 15:34:54 np0005544708 python3[4535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwU1XEfVDogsv6Y2JkZySMwq4Zdohkns3qBSg0XZ4yFEAOoqZTnyPrnCKWaH3Im/T599uliDyAHCYxlL6OopZ/VWx95YbCuoI/yWVcgeEyF+++N6GlQQnBVQcmvA7B0Mvv0wQfvmyE2+SOtTYySvBBUayoBE5AcQxi3hiXg2cegKwOdg/iepD9KMibLthbj40MXgn1e88YaS8jmBUIIAtx7rHFDvRugPF8YtbeW8k3nkZxqlZRFr7yqETQIEwKC3o3fbVYOMgV+7l0ep6A0TUKktH5h8YXnQqSzafMqseDnwb2Lu8WaszB/3k887xp9Lc3Q+Wl1p4xJCK5oVvG2oZceTSf7imAFaVubxK7bzUagWdMM3K8lAHy6GIBBuUKl9ePwrZTYITAP724k3XR1mm6Ind0GLmZFUvA3oC5B+G9Jay8Z9MO88r+xSDy5fusvo1ygMxKqUTuOt1amdLlDy0n95O5VxSzTVtGvIk2s3tzF0J7aUtHCqcak63BRTevb5c= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:34:55 np0005544708 python3[4559]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:34:55 np0005544708 python3[4658]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 15:34:55 np0005544708 python3[4729]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764794095.2719893-207-38996794071089/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=212c8454592e4879b062383806272265_id_rsa follow=False checksum=744ad42b3431e855d34445ed08d0cada55a7c21f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:34:56 np0005544708 python3[4852]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 15:34:56 np0005544708 python3[4923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764794096.1869526-240-235335856868184/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=212c8454592e4879b062383806272265_id_rsa.pub follow=False checksum=afffad914350138a10afbc72b08c3c0848ab6f39 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:34:56 np0005544708 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  3 15:34:58 np0005544708 python3[4973]: ansible-ping Invoked with data=pong
Dec  3 15:34:58 np0005544708 python3[4997]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 15:35:00 np0005544708 python3[5055]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec  3 15:35:01 np0005544708 python3[5087]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:35:01 np0005544708 python3[5111]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:35:01 np0005544708 python3[5135]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:35:02 np0005544708 python3[5159]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:35:02 np0005544708 python3[5183]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:35:02 np0005544708 python3[5207]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:35:04 np0005544708 python3[5233]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:35:04 np0005544708 python3[5311]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 15:35:05 np0005544708 python3[5384]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764794104.3639655-21-41117920850078/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:35:05 np0005544708 python3[5432]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:06 np0005544708 python3[5456]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:06 np0005544708 python3[5480]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:06 np0005544708 python3[5504]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:06 np0005544708 python3[5528]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:07 np0005544708 python3[5552]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:07 np0005544708 python3[5576]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:07 np0005544708 python3[5600]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:07 np0005544708 python3[5624]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:08 np0005544708 python3[5648]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:08 np0005544708 python3[5672]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:08 np0005544708 python3[5696]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:09 np0005544708 python3[5720]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:09 np0005544708 python3[5744]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:09 np0005544708 python3[5768]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:09 np0005544708 python3[5792]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:10 np0005544708 python3[5816]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:10 np0005544708 python3[5840]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:10 np0005544708 python3[5864]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:10 np0005544708 python3[5888]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:11 np0005544708 python3[5912]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:11 np0005544708 python3[5936]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:11 np0005544708 python3[5960]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:12 np0005544708 python3[5984]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:12 np0005544708 python3[6008]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:12 np0005544708 python3[6032]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:35:15 np0005544708 python3[6058]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  3 15:35:15 np0005544708 systemd[1]: Starting Time & Date Service...
Dec  3 15:35:15 np0005544708 systemd[1]: Started Time & Date Service.
Dec  3 15:35:15 np0005544708 systemd-timedated[6060]: Changed time zone to 'UTC' (UTC).
Dec  3 15:35:16 np0005544708 python3[6089]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:35:16 np0005544708 python3[6165]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 15:35:16 np0005544708 python3[6236]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764794116.31435-153-170510912668961/source _original_basename=tmpta0iq8a5 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:35:17 np0005544708 python3[6336]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 15:35:17 np0005544708 python3[6407]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764794117.1395614-183-275094820537795/source _original_basename=tmpv1kyb819 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:35:18 np0005544708 python3[6509]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 15:35:18 np0005544708 python3[6582]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764794118.2228136-231-11114428022070/source _original_basename=tmpnej6c4z7 follow=False checksum=873438299bb17ff1128a56bbeb324b7beaf57647 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:35:19 np0005544708 python3[6630]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 15:35:19 np0005544708 python3[6656]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 15:35:20 np0005544708 python3[6736]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 15:35:20 np0005544708 python3[6809]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764794119.8686967-273-38026006013058/source _original_basename=tmp38hiw7_r follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:35:21 np0005544708 python3[6860]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-63dc-f7bd-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 15:35:21 np0005544708 python3[6888]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-63dc-f7bd-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec  3 15:35:22 np0005544708 python3[6916]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:35:39 np0005544708 python3[6942]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:35:45 np0005544708 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  3 15:36:14 np0005544708 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  3 15:36:14 np0005544708 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec  3 15:36:14 np0005544708 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec  3 15:36:14 np0005544708 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec  3 15:36:14 np0005544708 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec  3 15:36:14 np0005544708 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec  3 15:36:14 np0005544708 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec  3 15:36:14 np0005544708 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec  3 15:36:14 np0005544708 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec  3 15:36:14 np0005544708 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec  3 15:36:14 np0005544708 NetworkManager[860]: <info>  [1764794174.7450] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  3 15:36:14 np0005544708 systemd-udevd[6945]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 15:36:14 np0005544708 NetworkManager[860]: <info>  [1764794174.7598] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 15:36:14 np0005544708 NetworkManager[860]: <info>  [1764794174.7618] settings: (eth1): created default wired connection 'Wired connection 1'
Dec  3 15:36:14 np0005544708 NetworkManager[860]: <info>  [1764794174.7621] device (eth1): carrier: link connected
Dec  3 15:36:14 np0005544708 NetworkManager[860]: <info>  [1764794174.7623] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  3 15:36:14 np0005544708 NetworkManager[860]: <info>  [1764794174.7628] policy: auto-activating connection 'Wired connection 1' (3fbb02b6-2141-3242-b016-afde93023b71)
Dec  3 15:36:14 np0005544708 NetworkManager[860]: <info>  [1764794174.7631] device (eth1): Activation: starting connection 'Wired connection 1' (3fbb02b6-2141-3242-b016-afde93023b71)
Dec  3 15:36:14 np0005544708 NetworkManager[860]: <info>  [1764794174.7632] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 15:36:14 np0005544708 NetworkManager[860]: <info>  [1764794174.7634] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 15:36:14 np0005544708 NetworkManager[860]: <info>  [1764794174.7637] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 15:36:14 np0005544708 NetworkManager[860]: <info>  [1764794174.7641] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  3 15:36:15 np0005544708 python3[6972]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-ef37-d628-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 15:36:25 np0005544708 python3[7052]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 15:36:25 np0005544708 python3[7125]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764794185.3297126-102-241354330614831/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=d896b078404086040eb34304c2daff8442162aca backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:36:26 np0005544708 python3[7175]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 15:36:26 np0005544708 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  3 15:36:26 np0005544708 systemd[1]: Stopped Network Manager Wait Online.
Dec  3 15:36:26 np0005544708 systemd[1]: Stopping Network Manager Wait Online...
Dec  3 15:36:26 np0005544708 systemd[1]: Stopping Network Manager...
Dec  3 15:36:26 np0005544708 NetworkManager[860]: <info>  [1764794186.8048] caught SIGTERM, shutting down normally.
Dec  3 15:36:26 np0005544708 NetworkManager[860]: <info>  [1764794186.8054] dhcp4 (eth0): canceled DHCP transaction
Dec  3 15:36:26 np0005544708 NetworkManager[860]: <info>  [1764794186.8054] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  3 15:36:26 np0005544708 NetworkManager[860]: <info>  [1764794186.8054] dhcp4 (eth0): state changed no lease
Dec  3 15:36:26 np0005544708 NetworkManager[860]: <info>  [1764794186.8056] manager: NetworkManager state is now CONNECTING
Dec  3 15:36:26 np0005544708 NetworkManager[860]: <info>  [1764794186.8156] dhcp4 (eth1): canceled DHCP transaction
Dec  3 15:36:26 np0005544708 NetworkManager[860]: <info>  [1764794186.8156] dhcp4 (eth1): state changed no lease
Dec  3 15:36:26 np0005544708 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  3 15:36:26 np0005544708 NetworkManager[860]: <info>  [1764794186.8215] exiting (success)
Dec  3 15:36:26 np0005544708 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  3 15:36:26 np0005544708 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  3 15:36:26 np0005544708 systemd[1]: Stopped Network Manager.
Dec  3 15:36:26 np0005544708 systemd[1]: Starting Network Manager...
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.8654] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:cb12512a-3aa8-4735-9c82-f409c246c155)
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.8657] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.8703] manager[0x560886f5b070]: monitoring kernel firmware directory '/lib/firmware'.
Dec  3 15:36:26 np0005544708 systemd[1]: Starting Hostname Service...
Dec  3 15:36:26 np0005544708 systemd[1]: Started Hostname Service.
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9492] hostname: hostname: using hostnamed
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9495] hostname: static hostname changed from (none) to "np0005544708.novalocal"
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9499] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9504] manager[0x560886f5b070]: rfkill: Wi-Fi hardware radio set enabled
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9505] manager[0x560886f5b070]: rfkill: WWAN hardware radio set enabled
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9534] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9535] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9536] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9536] manager: Networking is enabled by state file
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9538] settings: Loaded settings plugin: keyfile (internal)
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9542] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9572] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9582] dhcp: init: Using DHCP client 'internal'
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9585] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9591] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9597] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9605] device (lo): Activation: starting connection 'lo' (2dc71c9a-e258-42ef-b117-38009802277f)
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9611] device (eth0): carrier: link connected
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9616] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9621] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9621] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9629] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9636] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9642] device (eth1): carrier: link connected
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9646] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9651] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (3fbb02b6-2141-3242-b016-afde93023b71) (indicated)
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9652] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9658] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9665] device (eth1): Activation: starting connection 'Wired connection 1' (3fbb02b6-2141-3242-b016-afde93023b71)
Dec  3 15:36:26 np0005544708 systemd[1]: Started Network Manager.
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9671] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9676] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9679] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9681] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9683] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9686] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9689] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9691] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9695] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9702] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9705] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9713] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9715] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9729] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9733] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9739] device (lo): Activation: successful, device activated.
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9747] dhcp4 (eth0): state changed new lease, address=38.102.83.219
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9754] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  3 15:36:26 np0005544708 systemd[1]: Starting Network Manager Wait Online...
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9819] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9839] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9841] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9844] manager: NetworkManager state is now CONNECTED_SITE
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9849] device (eth0): Activation: successful, device activated.
Dec  3 15:36:26 np0005544708 NetworkManager[7187]: <info>  [1764794186.9854] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  3 15:36:27 np0005544708 python3[7259]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-ef37-d628-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 15:36:37 np0005544708 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  3 15:36:56 np0005544708 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.3689] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  3 15:37:12 np0005544708 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  3 15:37:12 np0005544708 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.4046] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.4048] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.4054] device (eth1): Activation: successful, device activated.
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.4063] manager: startup complete
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.4067] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <warn>  [1764794232.4073] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.4086] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec  3 15:37:12 np0005544708 systemd[1]: Finished Network Manager Wait Online.
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.4186] dhcp4 (eth1): canceled DHCP transaction
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.4186] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.4186] dhcp4 (eth1): state changed no lease
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.4197] policy: auto-activating connection 'ci-private-network' (82caecc5-1713-50fd-827f-a8910de7f4a3)
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.4201] device (eth1): Activation: starting connection 'ci-private-network' (82caecc5-1713-50fd-827f-a8910de7f4a3)
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.4202] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.4204] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.4209] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.4215] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.4606] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.4608] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 15:37:12 np0005544708 NetworkManager[7187]: <info>  [1764794232.4614] device (eth1): Activation: successful, device activated.
Dec  3 15:37:22 np0005544708 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  3 15:37:25 np0005544708 python3[7364]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 15:37:25 np0005544708 python3[7437]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764794244.8579648-267-269166847295127/source _original_basename=tmpttdtq7zu follow=False checksum=04d30233cf826d59f5b8db4451ae2768a3645fb5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:37:38 np0005544708 systemd[4301]: Starting Mark boot as successful...
Dec  3 15:37:38 np0005544708 systemd[4301]: Finished Mark boot as successful.
Dec  3 15:38:25 np0005544708 systemd-logind[787]: Session 1 logged out. Waiting for processes to exit.
Dec  3 15:40:38 np0005544708 systemd[4301]: Created slice User Background Tasks Slice.
Dec  3 15:40:38 np0005544708 systemd[4301]: Starting Cleanup of User's Temporary Files and Directories...
Dec  3 15:40:38 np0005544708 systemd[4301]: Finished Cleanup of User's Temporary Files and Directories.
Dec  3 15:42:18 np0005544708 systemd-logind[787]: New session 3 of user zuul.
Dec  3 15:42:18 np0005544708 systemd[1]: Started Session 3 of User zuul.
Dec  3 15:42:18 np0005544708 python3[7497]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-24ce-0cab-000000001cc0-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 15:42:18 np0005544708 python3[7525]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:42:19 np0005544708 python3[7551]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:42:19 np0005544708 python3[7578]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:42:19 np0005544708 python3[7604]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:42:20 np0005544708 python3[7630]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:42:20 np0005544708 python3[7708]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 15:42:21 np0005544708 python3[7781]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764794540.581451-466-97533957781398/source _original_basename=tmpau87und0 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:42:22 np0005544708 python3[7831]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 15:42:22 np0005544708 systemd[1]: Reloading.
Dec  3 15:42:22 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 15:42:23 np0005544708 python3[7887]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec  3 15:42:24 np0005544708 python3[7913]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 15:42:24 np0005544708 python3[7941]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 15:42:24 np0005544708 python3[7969]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 15:42:24 np0005544708 python3[7997]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 15:42:25 np0005544708 python3[8024]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-24ce-0cab-000000001cc7-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 15:42:25 np0005544708 python3[8054]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 15:42:27 np0005544708 systemd[1]: session-3.scope: Deactivated successfully.
Dec  3 15:42:27 np0005544708 systemd[1]: session-3.scope: Consumed 4.226s CPU time.
Dec  3 15:42:27 np0005544708 systemd-logind[787]: Session 3 logged out. Waiting for processes to exit.
Dec  3 15:42:27 np0005544708 systemd-logind[787]: Removed session 3.
Dec  3 15:42:29 np0005544708 systemd-logind[787]: New session 4 of user zuul.
Dec  3 15:42:29 np0005544708 systemd[1]: Started Session 4 of User zuul.
Dec  3 15:42:29 np0005544708 python3[8088]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  3 15:42:44 np0005544708 kernel: SELinux:  Converting 385 SID table entries...
Dec  3 15:42:44 np0005544708 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 15:42:44 np0005544708 kernel: SELinux:  policy capability open_perms=1
Dec  3 15:42:44 np0005544708 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 15:42:44 np0005544708 kernel: SELinux:  policy capability always_check_network=0
Dec  3 15:42:44 np0005544708 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 15:42:44 np0005544708 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 15:42:44 np0005544708 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 15:42:53 np0005544708 kernel: SELinux:  Converting 385 SID table entries...
Dec  3 15:42:53 np0005544708 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 15:42:53 np0005544708 kernel: SELinux:  policy capability open_perms=1
Dec  3 15:42:53 np0005544708 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 15:42:53 np0005544708 kernel: SELinux:  policy capability always_check_network=0
Dec  3 15:42:53 np0005544708 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 15:42:53 np0005544708 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 15:42:53 np0005544708 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 15:43:02 np0005544708 kernel: SELinux:  Converting 385 SID table entries...
Dec  3 15:43:02 np0005544708 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 15:43:02 np0005544708 kernel: SELinux:  policy capability open_perms=1
Dec  3 15:43:02 np0005544708 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 15:43:02 np0005544708 kernel: SELinux:  policy capability always_check_network=0
Dec  3 15:43:02 np0005544708 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 15:43:02 np0005544708 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 15:43:02 np0005544708 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 15:43:03 np0005544708 setsebool[8154]: The virt_use_nfs policy boolean was changed to 1 by root
Dec  3 15:43:03 np0005544708 setsebool[8154]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec  3 15:43:14 np0005544708 kernel: SELinux:  Converting 388 SID table entries...
Dec  3 15:43:14 np0005544708 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 15:43:14 np0005544708 kernel: SELinux:  policy capability open_perms=1
Dec  3 15:43:14 np0005544708 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 15:43:14 np0005544708 kernel: SELinux:  policy capability always_check_network=0
Dec  3 15:43:14 np0005544708 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 15:43:14 np0005544708 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 15:43:14 np0005544708 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 15:43:32 np0005544708 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  3 15:43:32 np0005544708 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 15:43:32 np0005544708 systemd[1]: Starting man-db-cache-update.service...
Dec  3 15:43:32 np0005544708 systemd[1]: Reloading.
Dec  3 15:43:32 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 15:43:32 np0005544708 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  3 15:43:39 np0005544708 python3[13894]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-b733-f88c-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 15:43:40 np0005544708 kernel: evm: overlay not supported
Dec  3 15:43:40 np0005544708 systemd[4301]: Starting D-Bus User Message Bus...
Dec  3 15:43:40 np0005544708 dbus-broker-launch[14112]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec  3 15:43:40 np0005544708 dbus-broker-launch[14112]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec  3 15:43:40 np0005544708 systemd[4301]: Started D-Bus User Message Bus.
Dec  3 15:43:40 np0005544708 dbus-broker-lau[14112]: Ready
Dec  3 15:43:40 np0005544708 systemd[4301]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  3 15:43:40 np0005544708 systemd[4301]: Created slice Slice /user.
Dec  3 15:43:40 np0005544708 systemd[4301]: podman-14045.scope: unit configures an IP firewall, but not running as root.
Dec  3 15:43:40 np0005544708 systemd[4301]: (This warning is only shown for the first unit using IP firewalling.)
Dec  3 15:43:40 np0005544708 systemd[4301]: Started podman-14045.scope.
Dec  3 15:43:40 np0005544708 systemd[4301]: Started podman-pause-17e5c5a3.scope.
Dec  3 15:43:41 np0005544708 python3[14631]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.217:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.217:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:43:41 np0005544708 python3[14631]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec  3 15:43:41 np0005544708 systemd[1]: session-4.scope: Deactivated successfully.
Dec  3 15:43:41 np0005544708 systemd[1]: session-4.scope: Consumed 59.611s CPU time.
Dec  3 15:43:41 np0005544708 systemd-logind[787]: Session 4 logged out. Waiting for processes to exit.
Dec  3 15:43:41 np0005544708 systemd-logind[787]: Removed session 4.
Dec  3 15:44:20 np0005544708 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 15:44:20 np0005544708 systemd[1]: Finished man-db-cache-update.service.
Dec  3 15:44:20 np0005544708 systemd[1]: man-db-cache-update.service: Consumed 58.221s CPU time.
Dec  3 15:44:20 np0005544708 systemd[1]: run-r6e0b67deac2841308050a4b0221d5f07.service: Deactivated successfully.
Dec  3 15:44:24 np0005544708 systemd-logind[787]: New session 5 of user zuul.
Dec  3 15:44:24 np0005544708 systemd[1]: Started Session 5 of User zuul.
Dec  3 15:44:25 np0005544708 python3[29600]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM0hVga1zRkUzrYCe7oc50WLLPKVDkVFkArpF5CarLZ9i3k6P99COH1nadZDO3eIJdFZ/LXbq11sH+72H0chG0g= zuul@np0005544707.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:44:25 np0005544708 python3[29626]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM0hVga1zRkUzrYCe7oc50WLLPKVDkVFkArpF5CarLZ9i3k6P99COH1nadZDO3eIJdFZ/LXbq11sH+72H0chG0g= zuul@np0005544707.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:44:26 np0005544708 python3[29652]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005544708.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec  3 15:44:26 np0005544708 python3[29686]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM0hVga1zRkUzrYCe7oc50WLLPKVDkVFkArpF5CarLZ9i3k6P99COH1nadZDO3eIJdFZ/LXbq11sH+72H0chG0g= zuul@np0005544707.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  3 15:44:27 np0005544708 python3[29764]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 15:44:27 np0005544708 python3[29837]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764794666.9580677-137-37094700288810/source _original_basename=tmpg7otz_t1 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:44:28 np0005544708 python3[29887]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec  3 15:44:28 np0005544708 systemd[1]: Starting Hostname Service...
Dec  3 15:44:28 np0005544708 systemd[1]: Started Hostname Service.
Dec  3 15:44:28 np0005544708 systemd-hostnamed[29891]: Changed pretty hostname to 'compute-0'
Dec  3 15:44:28 np0005544708 systemd-hostnamed[29891]: Hostname set to <compute-0> (static)
Dec  3 15:44:28 np0005544708 NetworkManager[7187]: <info>  [1764794668.8887] hostname: static hostname changed from "np0005544708.novalocal" to "compute-0"
Dec  3 15:44:28 np0005544708 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  3 15:44:28 np0005544708 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  3 15:44:29 np0005544708 systemd[1]: session-5.scope: Deactivated successfully.
Dec  3 15:44:29 np0005544708 systemd[1]: session-5.scope: Consumed 2.627s CPU time.
Dec  3 15:44:29 np0005544708 systemd-logind[787]: Session 5 logged out. Waiting for processes to exit.
Dec  3 15:44:29 np0005544708 systemd-logind[787]: Removed session 5.
Dec  3 15:44:38 np0005544708 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  3 15:44:58 np0005544708 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  3 15:48:13 np0005544708 systemd-logind[787]: New session 6 of user zuul.
Dec  3 15:48:13 np0005544708 systemd[1]: Started Session 6 of User zuul.
Dec  3 15:48:14 np0005544708 python3[29990]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 15:48:16 np0005544708 python3[30106]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 15:48:17 np0005544708 python3[30179]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764794896.1794226-33662-64347837682086/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:48:17 np0005544708 python3[30205]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 15:48:17 np0005544708 python3[30278]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764794896.1794226-33662-64347837682086/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:48:17 np0005544708 python3[30304]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 15:48:18 np0005544708 python3[30377]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764794896.1794226-33662-64347837682086/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:48:18 np0005544708 python3[30403]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 15:48:18 np0005544708 python3[30476]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764794896.1794226-33662-64347837682086/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:48:19 np0005544708 python3[30502]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 15:48:19 np0005544708 python3[30575]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764794896.1794226-33662-64347837682086/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:48:19 np0005544708 python3[30601]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 15:48:20 np0005544708 python3[30674]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764794896.1794226-33662-64347837682086/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:48:20 np0005544708 python3[30700]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 15:48:20 np0005544708 python3[30773]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764794896.1794226-33662-64347837682086/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:48:32 np0005544708 python3[30831]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 15:49:38 np0005544708 systemd[1]: Starting Cleanup of Temporary Directories...
Dec  3 15:49:38 np0005544708 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec  3 15:49:38 np0005544708 systemd[1]: Finished Cleanup of Temporary Directories.
Dec  3 15:49:38 np0005544708 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec  3 15:53:31 np0005544708 systemd[1]: session-6.scope: Deactivated successfully.
Dec  3 15:53:31 np0005544708 systemd[1]: session-6.scope: Consumed 5.490s CPU time.
Dec  3 15:53:31 np0005544708 systemd-logind[787]: Session 6 logged out. Waiting for processes to exit.
Dec  3 15:53:31 np0005544708 systemd-logind[787]: Removed session 6.
Dec  3 15:57:38 np0005544708 systemd[1]: Starting dnf makecache...
Dec  3 15:57:38 np0005544708 dnf[30838]: Failed determining last makecache time.
Dec  3 15:57:38 np0005544708 dnf[30838]: delorean-openstack-barbican-42b4c41831408a8e323 287 kB/s |  13 kB     00:00
Dec  3 15:57:38 np0005544708 dnf[30838]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 1.9 MB/s |  65 kB     00:00
Dec  3 15:57:38 np0005544708 dnf[30838]: delorean-openstack-cinder-1c00d6490d88e436f26ef 1.2 MB/s |  32 kB     00:00
Dec  3 15:57:38 np0005544708 dnf[30838]: delorean-python-stevedore-c4acc5639fd2329372142 5.3 MB/s | 131 kB     00:00
Dec  3 15:57:38 np0005544708 dnf[30838]: delorean-python-cloudkitty-tests-tempest-2c80f8 1.5 MB/s |  32 kB     00:00
Dec  3 15:57:38 np0005544708 dnf[30838]: delorean-os-net-config-d0cedbdb788d43e5c7551df5  13 MB/s | 349 kB     00:00
Dec  3 15:57:38 np0005544708 dnf[30838]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 2.0 MB/s |  42 kB     00:00
Dec  3 15:57:38 np0005544708 dnf[30838]: delorean-python-designate-tests-tempest-347fdbc 884 kB/s |  18 kB     00:00
Dec  3 15:57:38 np0005544708 dnf[30838]: delorean-openstack-glance-1fd12c29b339f30fe823e 916 kB/s |  18 kB     00:00
Dec  3 15:57:38 np0005544708 dnf[30838]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 1.5 MB/s |  29 kB     00:00
Dec  3 15:57:38 np0005544708 dnf[30838]: delorean-openstack-manila-3c01b7181572c95dac462 1.3 MB/s |  25 kB     00:00
Dec  3 15:57:38 np0005544708 dnf[30838]: delorean-python-whitebox-neutron-tests-tempest- 6.0 MB/s | 154 kB     00:00
Dec  3 15:57:38 np0005544708 dnf[30838]: delorean-openstack-octavia-ba397f07a7331190208c 1.3 MB/s |  26 kB     00:00
Dec  3 15:57:38 np0005544708 dnf[30838]: delorean-openstack-watcher-c014f81a8647287f6dcc 791 kB/s |  16 kB     00:00
Dec  3 15:57:39 np0005544708 dnf[30838]: delorean-ansible-config_template-5ccaa22121a7ff 350 kB/s | 7.4 kB     00:00
Dec  3 15:57:39 np0005544708 dnf[30838]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 6.5 MB/s | 144 kB     00:00
Dec  3 15:57:39 np0005544708 dnf[30838]: delorean-openstack-swift-dc98a8463506ac520c469a 667 kB/s |  14 kB     00:00
Dec  3 15:57:39 np0005544708 dnf[30838]: delorean-python-tempestconf-8515371b7cceebd4282 2.7 MB/s |  53 kB     00:00
Dec  3 15:57:39 np0005544708 dnf[30838]: delorean-openstack-heat-ui-013accbfd179753bc3f0 4.4 MB/s |  96 kB     00:00
Dec  3 15:57:39 np0005544708 dnf[30838]: CentOS Stream 9 - BaseOS                         62 kB/s | 6.4 kB     00:00
Dec  3 15:57:39 np0005544708 dnf[30838]: CentOS Stream 9 - AppStream                      28 kB/s | 6.5 kB     00:00
Dec  3 15:57:39 np0005544708 dnf[30838]: CentOS Stream 9 - CRB                            74 kB/s | 6.3 kB     00:00
Dec  3 15:57:39 np0005544708 dnf[30838]: CentOS Stream 9 - Extras packages                77 kB/s | 8.3 kB     00:00
Dec  3 15:57:39 np0005544708 dnf[30838]: dlrn-antelope-testing                            31 MB/s | 1.1 MB     00:00
Dec  3 15:57:40 np0005544708 dnf[30838]: dlrn-antelope-build-deps                         17 MB/s | 461 kB     00:00
Dec  3 15:57:40 np0005544708 dnf[30838]: centos9-rabbitmq                                7.5 MB/s | 123 kB     00:00
Dec  3 15:57:40 np0005544708 dnf[30838]: centos9-storage                                  24 MB/s | 415 kB     00:00
Dec  3 15:57:40 np0005544708 dnf[30838]: centos9-opstools                                4.3 MB/s |  51 kB     00:00
Dec  3 15:57:40 np0005544708 dnf[30838]: NFV SIG OpenvSwitch                              20 MB/s | 456 kB     00:00
Dec  3 15:57:41 np0005544708 dnf[30838]: repo-setup-centos-appstream                      87 MB/s |  25 MB     00:00
Dec  3 15:57:47 np0005544708 dnf[30838]: repo-setup-centos-baseos                         77 MB/s | 8.8 MB     00:00
Dec  3 15:57:48 np0005544708 dnf[30838]: repo-setup-centos-highavailability               33 MB/s | 744 kB     00:00
Dec  3 15:57:48 np0005544708 dnf[30838]: repo-setup-centos-powertools                     80 MB/s | 7.3 MB     00:00
Dec  3 15:57:51 np0005544708 dnf[30838]: Extra Packages for Enterprise Linux 9 - x86_64   17 MB/s |  20 MB     00:01
Dec  3 15:58:04 np0005544708 dnf[30838]: Metadata cache created.
Dec  3 15:58:04 np0005544708 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  3 15:58:04 np0005544708 systemd[1]: Finished dnf makecache.
Dec  3 15:58:04 np0005544708 systemd[1]: dnf-makecache.service: Consumed 24.404s CPU time.
Dec  3 15:59:06 np0005544708 systemd-logind[787]: New session 7 of user zuul.
Dec  3 15:59:06 np0005544708 systemd[1]: Started Session 7 of User zuul.
Dec  3 15:59:07 np0005544708 python3.9[31094]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 15:59:08 np0005544708 python3.9[31275]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 15:59:16 np0005544708 systemd[1]: session-7.scope: Deactivated successfully.
Dec  3 15:59:16 np0005544708 systemd[1]: session-7.scope: Consumed 8.480s CPU time.
Dec  3 15:59:16 np0005544708 systemd-logind[787]: Session 7 logged out. Waiting for processes to exit.
Dec  3 15:59:16 np0005544708 systemd-logind[787]: Removed session 7.
Dec  3 15:59:32 np0005544708 systemd-logind[787]: New session 8 of user zuul.
Dec  3 15:59:32 np0005544708 systemd[1]: Started Session 8 of User zuul.
Dec  3 15:59:33 np0005544708 python3.9[31486]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  3 15:59:34 np0005544708 python3.9[31660]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 15:59:35 np0005544708 python3.9[31812]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 15:59:36 np0005544708 python3.9[31965]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 15:59:37 np0005544708 python3.9[32117]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:59:38 np0005544708 python3.9[32269]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 15:59:38 np0005544708 python3.9[32392]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764795577.3647454-73-231380558920240/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:59:39 np0005544708 python3.9[32544]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 15:59:40 np0005544708 python3.9[32700]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 15:59:41 np0005544708 python3.9[32852]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 15:59:42 np0005544708 python3.9[33002]: ansible-ansible.builtin.service_facts Invoked
Dec  3 15:59:47 np0005544708 python3.9[33255]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 15:59:48 np0005544708 python3.9[33405]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 15:59:49 np0005544708 python3.9[33559]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 15:59:50 np0005544708 python3.9[33717]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 15:59:51 np0005544708 python3.9[33801]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:00:34 np0005544708 systemd[1]: Reloading.
Dec  3 16:00:34 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:00:34 np0005544708 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec  3 16:00:35 np0005544708 systemd[1]: Reloading.
Dec  3 16:00:35 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:00:35 np0005544708 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec  3 16:00:35 np0005544708 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec  3 16:00:35 np0005544708 systemd[1]: Reloading.
Dec  3 16:00:35 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:00:35 np0005544708 systemd[1]: Listening on LVM2 poll daemon socket.
Dec  3 16:00:36 np0005544708 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Dec  3 16:00:36 np0005544708 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Dec  3 16:00:36 np0005544708 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Dec  3 16:01:41 np0005544708 kernel: SELinux:  Converting 2719 SID table entries...
Dec  3 16:01:41 np0005544708 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 16:01:41 np0005544708 kernel: SELinux:  policy capability open_perms=1
Dec  3 16:01:41 np0005544708 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 16:01:41 np0005544708 kernel: SELinux:  policy capability always_check_network=0
Dec  3 16:01:41 np0005544708 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 16:01:41 np0005544708 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 16:01:41 np0005544708 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 16:01:41 np0005544708 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec  3 16:01:42 np0005544708 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 16:01:42 np0005544708 systemd[1]: Starting man-db-cache-update.service...
Dec  3 16:01:42 np0005544708 systemd[1]: Reloading.
Dec  3 16:01:42 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:01:42 np0005544708 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  3 16:01:43 np0005544708 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 16:01:43 np0005544708 systemd[1]: Finished man-db-cache-update.service.
Dec  3 16:01:43 np0005544708 systemd[1]: man-db-cache-update.service: Consumed 1.530s CPU time.
Dec  3 16:01:43 np0005544708 systemd[1]: run-rf47e1ca3264549e2adb42401c31b8e1b.service: Deactivated successfully.
Dec  3 16:01:43 np0005544708 python3.9[35339]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:01:45 np0005544708 python3.9[35622]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  3 16:01:46 np0005544708 python3.9[35774]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  3 16:01:49 np0005544708 python3.9[35927]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:01:50 np0005544708 python3.9[36079]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  3 16:01:51 np0005544708 python3.9[36231]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:01:52 np0005544708 python3.9[36383]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:01:53 np0005544708 python3.9[36506]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764795712.016875-236-108060662745938/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b6113f6c3c3112d11c0348cd0a11619cc2e5f10c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:01:54 np0005544708 python3.9[36658]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:01:55 np0005544708 python3.9[36810]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:01:56 np0005544708 python3.9[36963]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:02:01 np0005544708 python3.9[37115]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  3 16:02:01 np0005544708 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 16:02:02 np0005544708 python3.9[37269]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  3 16:02:03 np0005544708 python3.9[37427]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  3 16:02:04 np0005544708 python3.9[37587]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  3 16:02:05 np0005544708 python3.9[37740]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  3 16:02:06 np0005544708 python3.9[37898]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  3 16:02:07 np0005544708 python3.9[38050]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:02:09 np0005544708 python3.9[38203]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:02:10 np0005544708 python3.9[38355]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:02:11 np0005544708 python3.9[38478]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764795729.7922132-355-193119444306434/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:02:12 np0005544708 python3.9[38630]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 16:02:12 np0005544708 systemd[1]: Starting Load Kernel Modules...
Dec  3 16:02:12 np0005544708 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec  3 16:02:12 np0005544708 kernel: Bridge firewalling registered
Dec  3 16:02:12 np0005544708 systemd-modules-load[38634]: Inserted module 'br_netfilter'
Dec  3 16:02:12 np0005544708 systemd[1]: Finished Load Kernel Modules.
Dec  3 16:02:13 np0005544708 python3.9[38789]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:02:13 np0005544708 python3.9[38912]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764795732.521198-378-50133161386334/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:02:14 np0005544708 python3.9[39064]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:02:18 np0005544708 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Dec  3 16:02:18 np0005544708 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Dec  3 16:02:18 np0005544708 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 16:02:18 np0005544708 systemd[1]: Starting man-db-cache-update.service...
Dec  3 16:02:18 np0005544708 systemd[1]: Reloading.
Dec  3 16:02:18 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:02:18 np0005544708 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  3 16:02:20 np0005544708 python3.9[40382]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:02:20 np0005544708 python3.9[41282]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  3 16:02:21 np0005544708 python3.9[42028]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:02:22 np0005544708 python3.9[42912]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:02:22 np0005544708 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  3 16:02:22 np0005544708 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 16:02:22 np0005544708 systemd[1]: Finished man-db-cache-update.service.
Dec  3 16:02:22 np0005544708 systemd[1]: man-db-cache-update.service: Consumed 5.367s CPU time.
Dec  3 16:02:22 np0005544708 systemd[1]: run-rb11c4417196d4ddea1d5e4d1102a5dae.service: Deactivated successfully.
Dec  3 16:02:22 np0005544708 systemd[1]: Starting Authorization Manager...
Dec  3 16:02:22 np0005544708 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  3 16:02:23 np0005544708 polkitd[43470]: Started polkitd version 0.117
Dec  3 16:02:23 np0005544708 systemd[1]: Started Authorization Manager.
Dec  3 16:02:24 np0005544708 python3.9[43640]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:02:24 np0005544708 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  3 16:02:24 np0005544708 systemd[1]: tuned.service: Deactivated successfully.
Dec  3 16:02:24 np0005544708 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  3 16:02:24 np0005544708 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  3 16:02:24 np0005544708 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  3 16:02:25 np0005544708 python3.9[43802]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  3 16:02:27 np0005544708 python3.9[43954]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:02:27 np0005544708 systemd[1]: Reloading.
Dec  3 16:02:27 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:02:28 np0005544708 python3.9[44144]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:02:28 np0005544708 systemd[1]: Reloading.
Dec  3 16:02:28 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:02:29 np0005544708 python3.9[44333]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:02:30 np0005544708 python3.9[44486]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:02:30 np0005544708 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec  3 16:02:31 np0005544708 python3.9[44641]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:02:33 np0005544708 python3.9[44803]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:02:34 np0005544708 python3.9[44956]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 16:02:34 np0005544708 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  3 16:02:34 np0005544708 systemd[1]: Stopped Apply Kernel Variables.
Dec  3 16:02:34 np0005544708 systemd[1]: Stopping Apply Kernel Variables...
Dec  3 16:02:34 np0005544708 systemd[1]: Starting Apply Kernel Variables...
Dec  3 16:02:34 np0005544708 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  3 16:02:34 np0005544708 systemd[1]: Finished Apply Kernel Variables.
Dec  3 16:02:34 np0005544708 systemd[1]: session-8.scope: Deactivated successfully.
Dec  3 16:02:34 np0005544708 systemd[1]: session-8.scope: Consumed 2min 20.167s CPU time.
Dec  3 16:02:34 np0005544708 systemd-logind[787]: Session 8 logged out. Waiting for processes to exit.
Dec  3 16:02:34 np0005544708 systemd-logind[787]: Removed session 8.
Dec  3 16:02:40 np0005544708 systemd-logind[787]: New session 9 of user zuul.
Dec  3 16:02:40 np0005544708 systemd[1]: Started Session 9 of User zuul.
Dec  3 16:02:41 np0005544708 python3.9[45145]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:02:42 np0005544708 python3.9[45301]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  3 16:02:43 np0005544708 python3.9[45454]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  3 16:02:44 np0005544708 python3.9[45614]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  3 16:02:45 np0005544708 python3.9[45774]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 16:02:46 np0005544708 python3.9[45858]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  3 16:02:49 np0005544708 python3.9[46021]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:03:00 np0005544708 kernel: SELinux:  Converting 2731 SID table entries...
Dec  3 16:03:00 np0005544708 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 16:03:00 np0005544708 kernel: SELinux:  policy capability open_perms=1
Dec  3 16:03:00 np0005544708 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 16:03:00 np0005544708 kernel: SELinux:  policy capability always_check_network=0
Dec  3 16:03:00 np0005544708 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 16:03:00 np0005544708 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 16:03:00 np0005544708 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 16:03:01 np0005544708 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec  3 16:03:01 np0005544708 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec  3 16:03:02 np0005544708 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 16:03:02 np0005544708 systemd[1]: Starting man-db-cache-update.service...
Dec  3 16:03:02 np0005544708 systemd[1]: Reloading.
Dec  3 16:03:02 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:03:02 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:03:03 np0005544708 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  3 16:03:03 np0005544708 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 16:03:03 np0005544708 systemd[1]: Finished man-db-cache-update.service.
Dec  3 16:03:03 np0005544708 systemd[1]: run-r2904b5ae85c94a51bf39cda9d8f3f11b.service: Deactivated successfully.
Dec  3 16:03:04 np0005544708 python3.9[47120]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 16:03:04 np0005544708 systemd[1]: Reloading.
Dec  3 16:03:05 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:03:05 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:03:05 np0005544708 systemd[1]: Starting Open vSwitch Database Unit...
Dec  3 16:03:05 np0005544708 chown[47162]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec  3 16:03:05 np0005544708 ovs-ctl[47167]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec  3 16:03:05 np0005544708 ovs-ctl[47167]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec  3 16:03:05 np0005544708 ovs-ctl[47167]: Starting ovsdb-server [  OK  ]
Dec  3 16:03:05 np0005544708 ovs-vsctl[47217]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec  3 16:03:05 np0005544708 ovs-vsctl[47233]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"f27c01e7-5b62-4209-a664-3ae50b74644d\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec  3 16:03:05 np0005544708 ovs-ctl[47167]: Configuring Open vSwitch system IDs [  OK  ]
Dec  3 16:03:05 np0005544708 ovs-vsctl[47242]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  3 16:03:05 np0005544708 ovs-ctl[47167]: Enabling remote OVSDB managers [  OK  ]
Dec  3 16:03:05 np0005544708 systemd[1]: Started Open vSwitch Database Unit.
Dec  3 16:03:05 np0005544708 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec  3 16:03:05 np0005544708 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec  3 16:03:05 np0005544708 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec  3 16:03:05 np0005544708 kernel: openvswitch: Open vSwitch switching datapath
Dec  3 16:03:05 np0005544708 ovs-ctl[47288]: Inserting openvswitch module [  OK  ]
Dec  3 16:03:05 np0005544708 ovs-ctl[47256]: Starting ovs-vswitchd [  OK  ]
Dec  3 16:03:05 np0005544708 ovs-vsctl[47305]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  3 16:03:05 np0005544708 ovs-ctl[47256]: Enabling remote OVSDB managers [  OK  ]
Dec  3 16:03:05 np0005544708 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec  3 16:03:05 np0005544708 systemd[1]: Starting Open vSwitch...
Dec  3 16:03:05 np0005544708 systemd[1]: Finished Open vSwitch.
Dec  3 16:03:06 np0005544708 python3.9[47457]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:03:07 np0005544708 python3.9[47609]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  3 16:03:08 np0005544708 kernel: SELinux:  Converting 2745 SID table entries...
Dec  3 16:03:09 np0005544708 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 16:03:09 np0005544708 kernel: SELinux:  policy capability open_perms=1
Dec  3 16:03:09 np0005544708 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 16:03:09 np0005544708 kernel: SELinux:  policy capability always_check_network=0
Dec  3 16:03:09 np0005544708 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 16:03:09 np0005544708 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 16:03:09 np0005544708 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 16:03:09 np0005544708 python3.9[47765]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:03:10 np0005544708 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec  3 16:03:10 np0005544708 python3.9[47923]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:03:12 np0005544708 python3.9[48076]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:03:15 np0005544708 python3.9[48363]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  3 16:03:16 np0005544708 python3.9[48513]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:03:17 np0005544708 python3.9[48667]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:03:19 np0005544708 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 16:03:19 np0005544708 systemd[1]: Starting man-db-cache-update.service...
Dec  3 16:03:19 np0005544708 systemd[1]: Reloading.
Dec  3 16:03:19 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:03:19 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:03:19 np0005544708 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  3 16:03:20 np0005544708 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 16:03:20 np0005544708 systemd[1]: Finished man-db-cache-update.service.
Dec  3 16:03:20 np0005544708 systemd[1]: run-re08e8272296d4b86958b9aa36390917e.service: Deactivated successfully.
Dec  3 16:03:20 np0005544708 python3.9[48985]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 16:03:20 np0005544708 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  3 16:03:20 np0005544708 systemd[1]: Stopped Network Manager Wait Online.
Dec  3 16:03:20 np0005544708 systemd[1]: Stopping Network Manager Wait Online...
Dec  3 16:03:20 np0005544708 systemd[1]: Stopping Network Manager...
Dec  3 16:03:20 np0005544708 NetworkManager[7187]: <info>  [1764795800.9402] caught SIGTERM, shutting down normally.
Dec  3 16:03:20 np0005544708 NetworkManager[7187]: <info>  [1764795800.9422] dhcp4 (eth0): canceled DHCP transaction
Dec  3 16:03:20 np0005544708 NetworkManager[7187]: <info>  [1764795800.9422] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  3 16:03:20 np0005544708 NetworkManager[7187]: <info>  [1764795800.9422] dhcp4 (eth0): state changed no lease
Dec  3 16:03:20 np0005544708 NetworkManager[7187]: <info>  [1764795800.9426] manager: NetworkManager state is now CONNECTED_SITE
Dec  3 16:03:20 np0005544708 NetworkManager[7187]: <info>  [1764795800.9499] exiting (success)
Dec  3 16:03:20 np0005544708 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  3 16:03:20 np0005544708 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  3 16:03:20 np0005544708 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  3 16:03:20 np0005544708 systemd[1]: Stopped Network Manager.
Dec  3 16:03:20 np0005544708 systemd[1]: NetworkManager.service: Consumed 11.162s CPU time, 4.1M memory peak, read 0B from disk, written 35.5K to disk.
Dec  3 16:03:20 np0005544708 systemd[1]: Starting Network Manager...
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.0285] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:cb12512a-3aa8-4735-9c82-f409c246c155)
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.0286] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.0353] manager[0x5653f81e0090]: monitoring kernel firmware directory '/lib/firmware'.
Dec  3 16:03:21 np0005544708 systemd[1]: Starting Hostname Service...
Dec  3 16:03:21 np0005544708 systemd[1]: Started Hostname Service.
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1578] hostname: hostname: using hostnamed
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1579] hostname: static hostname changed from (none) to "compute-0"
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1588] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1598] manager[0x5653f81e0090]: rfkill: Wi-Fi hardware radio set enabled
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1598] manager[0x5653f81e0090]: rfkill: WWAN hardware radio set enabled
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1639] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1655] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1656] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1658] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1659] manager: Networking is enabled by state file
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1663] settings: Loaded settings plugin: keyfile (internal)
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1669] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1711] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1725] dhcp: init: Using DHCP client 'internal'
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1730] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1738] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1749] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1763] device (lo): Activation: starting connection 'lo' (2dc71c9a-e258-42ef-b117-38009802277f)
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1775] device (eth0): carrier: link connected
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1782] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1790] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1791] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1801] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1811] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1820] device (eth1): carrier: link connected
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1828] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1835] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (82caecc5-1713-50fd-827f-a8910de7f4a3) (indicated)
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1836] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1844] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1855] device (eth1): Activation: starting connection 'ci-private-network' (82caecc5-1713-50fd-827f-a8910de7f4a3)
Dec  3 16:03:21 np0005544708 systemd[1]: Started Network Manager.
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1865] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1879] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1883] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1887] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1891] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1896] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1900] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1903] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1909] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1921] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1926] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1940] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1962] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1979] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1986] dhcp4 (eth0): state changed new lease, address=38.102.83.219
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1990] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.1998] device (lo): Activation: successful, device activated.
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.2021] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  3 16:03:21 np0005544708 systemd[1]: Starting Network Manager Wait Online...
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.2239] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.2248] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.2251] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.2255] manager: NetworkManager state is now CONNECTED_LOCAL
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.2261] device (eth1): Activation: successful, device activated.
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.2312] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.2314] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.2319] manager: NetworkManager state is now CONNECTED_SITE
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.2325] device (eth0): Activation: successful, device activated.
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.2334] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  3 16:03:21 np0005544708 NetworkManager[48996]: <info>  [1764795801.2383] manager: startup complete
Dec  3 16:03:21 np0005544708 systemd[1]: Finished Network Manager Wait Online.
Dec  3 16:03:21 np0005544708 python3.9[49212]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:03:26 np0005544708 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 16:03:26 np0005544708 systemd[1]: Starting man-db-cache-update.service...
Dec  3 16:03:26 np0005544708 systemd[1]: Reloading.
Dec  3 16:03:26 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:03:26 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:03:26 np0005544708 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  3 16:03:27 np0005544708 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 16:03:27 np0005544708 systemd[1]: Finished man-db-cache-update.service.
Dec  3 16:03:27 np0005544708 systemd[1]: run-rd7a8bc2f491843aea0b63b3d825837d2.service: Deactivated successfully.
Dec  3 16:03:28 np0005544708 python3.9[49671]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:03:29 np0005544708 python3.9[49823]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:03:30 np0005544708 python3.9[49977]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:03:30 np0005544708 python3.9[50129]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:03:31 np0005544708 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  3 16:03:31 np0005544708 python3.9[50281]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:03:32 np0005544708 python3.9[50433]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:03:32 np0005544708 python3.9[50585]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:03:33 np0005544708 python3.9[50708]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764795812.5533903-229-207182494871959/.source _original_basename=.a_cf4f79 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:03:34 np0005544708 python3.9[50860]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:03:35 np0005544708 python3.9[51012]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec  3 16:03:35 np0005544708 python3.9[51164]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:03:38 np0005544708 python3.9[51591]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec  3 16:03:39 np0005544708 ansible-async_wrapper.py[51766]: Invoked with j632954729244 300 /home/zuul/.ansible/tmp/ansible-tmp-1764795818.2231066-295-252145472369764/AnsiballZ_edpm_os_net_config.py _
Dec  3 16:03:39 np0005544708 ansible-async_wrapper.py[51769]: Starting module and watcher
Dec  3 16:03:39 np0005544708 ansible-async_wrapper.py[51769]: Start watching 51770 (300)
Dec  3 16:03:39 np0005544708 ansible-async_wrapper.py[51770]: Start module (51770)
Dec  3 16:03:39 np0005544708 ansible-async_wrapper.py[51766]: Return async_wrapper task started.
Dec  3 16:03:39 np0005544708 python3.9[51771]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec  3 16:03:39 np0005544708 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec  3 16:03:39 np0005544708 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec  3 16:03:39 np0005544708 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec  3 16:03:39 np0005544708 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec  3 16:03:39 np0005544708 kernel: cfg80211: failed to load regulatory.db
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.0655] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.0673] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1162] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1163] audit: op="connection-add" uuid="8c167eb1-5903-4c8a-9b7a-1f7180590fe2" name="br-ex-br" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1177] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1179] audit: op="connection-add" uuid="3c63e537-d723-49f2-8a13-2ec654a09dab" name="br-ex-port" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1191] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1201] audit: op="connection-add" uuid="edbf8d0d-f777-4722-b65c-923d33f3c339" name="eth1-port" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1211] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1212] audit: op="connection-add" uuid="f7a9ac4a-94ea-4a62-a8b4-d1c718c840d2" name="vlan20-port" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1222] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1224] audit: op="connection-add" uuid="2733bc75-1892-48e8-8bdf-7fb6bb8c6608" name="vlan21-port" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1234] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1236] audit: op="connection-add" uuid="ceacbaec-a9b5-48f3-8f85-bbdc1fa6a57b" name="vlan22-port" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1246] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1247] audit: op="connection-add" uuid="b2408982-c017-4a7a-9d60-128e2abc7a05" name="vlan23-port" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1266] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1280] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1282] audit: op="connection-add" uuid="9434fda8-8961-4e83-8d0a-2530a2533efc" name="br-ex-if" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1347] audit: op="connection-update" uuid="82caecc5-1713-50fd-827f-a8910de7f4a3" name="ci-private-network" args="ipv4.dns,ipv4.addresses,ipv4.method,ipv4.never-default,ipv4.routes,ipv4.routing-rules,ovs-interface.type,ipv6.addr-gen-mode,ipv6.addresses,ipv6.method,ipv6.dns,ipv6.routes,ipv6.routing-rules,connection.controller,connection.master,connection.port-type,connection.timestamp,connection.slave-type,ovs-external-ids.data" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1362] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1364] audit: op="connection-add" uuid="8fb07113-a063-480b-985d-a6f24fbd45d4" name="vlan20-if" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1378] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1379] audit: op="connection-add" uuid="971d684f-fc1b-4aff-916f-5b92f8e6d9a9" name="vlan21-if" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1393] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1394] audit: op="connection-add" uuid="4e702b00-3924-42dc-8632-35b8cbca1464" name="vlan22-if" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1408] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1409] audit: op="connection-add" uuid="f611a32b-045a-40ae-aafd-5036a92f60e4" name="vlan23-if" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1421] audit: op="connection-delete" uuid="3fbb02b6-2141-3242-b016-afde93023b71" name="Wired connection 1" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1431] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1440] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1444] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (8c167eb1-5903-4c8a-9b7a-1f7180590fe2)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1444] audit: op="connection-activate" uuid="8c167eb1-5903-4c8a-9b7a-1f7180590fe2" name="br-ex-br" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1446] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1452] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1456] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (3c63e537-d723-49f2-8a13-2ec654a09dab)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1458] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1463] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1467] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (edbf8d0d-f777-4722-b65c-923d33f3c339)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1468] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1474] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1478] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (f7a9ac4a-94ea-4a62-a8b4-d1c718c840d2)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1480] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1486] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1489] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (2733bc75-1892-48e8-8bdf-7fb6bb8c6608)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1490] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1497] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1500] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (ceacbaec-a9b5-48f3-8f85-bbdc1fa6a57b)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1502] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1508] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1511] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (b2408982-c017-4a7a-9d60-128e2abc7a05)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1512] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1514] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1516] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1522] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1526] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1530] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (9434fda8-8961-4e83-8d0a-2530a2533efc)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1531] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1534] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1536] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1537] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1538] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1547] device (eth1): disconnecting for new activation request.
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1548] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1550] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1552] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1552] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1555] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1559] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1563] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (8fb07113-a063-480b-985d-a6f24fbd45d4)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1564] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1567] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1569] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1571] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1573] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1579] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1584] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (971d684f-fc1b-4aff-916f-5b92f8e6d9a9)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1585] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1589] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1591] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1592] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1596] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1601] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1605] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (4e702b00-3924-42dc-8632-35b8cbca1464)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1606] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1609] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1611] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1613] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1616] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1620] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1624] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (f611a32b-045a-40ae-aafd-5036a92f60e4)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1625] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1628] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1630] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1631] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1632] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1643] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1645] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1648] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1650] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1656] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1659] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1663] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1665] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1667] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1672] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 kernel: ovs-system: entered promiscuous mode
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1677] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1680] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1682] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1687] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1690] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1693] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1695] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 systemd-udevd[51777]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 16:03:41 np0005544708 kernel: Timeout policy base is empty
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1700] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1704] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1707] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1709] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1714] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1718] dhcp4 (eth0): canceled DHCP transaction
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1719] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1719] dhcp4 (eth0): state changed no lease
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1720] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1731] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1735] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51772 uid=0 result="fail" reason="Device is not activated"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1739] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec  3 16:03:41 np0005544708 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1773] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1781] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1785] dhcp4 (eth0): state changed new lease, address=38.102.83.219
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1790] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1830] device (eth1): disconnecting for new activation request.
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1831] audit: op="connection-activate" uuid="82caecc5-1713-50fd-827f-a8910de7f4a3" name="ci-private-network" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.1916] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2021] device (eth1): Activation: starting connection 'ci-private-network' (82caecc5-1713-50fd-827f-a8910de7f4a3)
Dec  3 16:03:41 np0005544708 kernel: br-ex: entered promiscuous mode
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2026] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2027] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51772 uid=0 result="success"
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2028] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2028] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2029] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2030] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2032] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2033] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2037] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2039] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2042] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2045] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2048] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2051] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2054] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2058] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2061] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2063] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2066] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2070] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2073] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2077] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2080] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2084] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2091] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2102] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec  3 16:03:41 np0005544708 kernel: vlan22: entered promiscuous mode
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2106] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 systemd-udevd[51776]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 16:03:41 np0005544708 kernel: vlan23: entered promiscuous mode
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2176] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2178] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 systemd-udevd[51778]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2182] device (eth1): Activation: successful, device activated.
Dec  3 16:03:41 np0005544708 kernel: vlan20: entered promiscuous mode
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2269] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2274] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2282] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2299] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2306] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2324] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 kernel: vlan21: entered promiscuous mode
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2333] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2334] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2339] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2344] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2350] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2355] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  3 16:03:41 np0005544708 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2394] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2395] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2402] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2409] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2425] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2458] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2459] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2461] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2465] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2479] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2523] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2524] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  3 16:03:41 np0005544708 NetworkManager[48996]: <info>  [1764795821.2528] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  3 16:03:42 np0005544708 NetworkManager[48996]: <info>  [1764795822.3667] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51772 uid=0 result="success"
Dec  3 16:03:42 np0005544708 NetworkManager[48996]: <info>  [1764795822.5879] checkpoint[0x5653f81b5950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec  3 16:03:42 np0005544708 NetworkManager[48996]: <info>  [1764795822.5881] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51772 uid=0 result="success"
Dec  3 16:03:42 np0005544708 python3.9[52131]: ansible-ansible.legacy.async_status Invoked with jid=j632954729244.51766 mode=status _async_dir=/root/.ansible_async
Dec  3 16:03:42 np0005544708 NetworkManager[48996]: <info>  [1764795822.8789] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51772 uid=0 result="success"
Dec  3 16:03:42 np0005544708 NetworkManager[48996]: <info>  [1764795822.8803] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51772 uid=0 result="success"
Dec  3 16:03:43 np0005544708 NetworkManager[48996]: <info>  [1764795823.0991] audit: op="networking-control" arg="global-dns-configuration" pid=51772 uid=0 result="success"
Dec  3 16:03:43 np0005544708 NetworkManager[48996]: <info>  [1764795823.3062] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec  3 16:03:43 np0005544708 NetworkManager[48996]: <info>  [1764795823.3694] audit: op="networking-control" arg="global-dns-configuration" pid=51772 uid=0 result="success"
Dec  3 16:03:43 np0005544708 NetworkManager[48996]: <info>  [1764795823.4122] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51772 uid=0 result="success"
Dec  3 16:03:43 np0005544708 NetworkManager[48996]: <info>  [1764795823.6216] checkpoint[0x5653f81b5a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec  3 16:03:43 np0005544708 NetworkManager[48996]: <info>  [1764795823.6222] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51772 uid=0 result="success"
Dec  3 16:03:43 np0005544708 ansible-async_wrapper.py[51770]: Module complete (51770)
Dec  3 16:03:44 np0005544708 ansible-async_wrapper.py[51769]: Done in kid B.
Dec  3 16:03:46 np0005544708 python3.9[52236]: ansible-ansible.legacy.async_status Invoked with jid=j632954729244.51766 mode=status _async_dir=/root/.ansible_async
Dec  3 16:03:46 np0005544708 python3.9[52336]: ansible-ansible.legacy.async_status Invoked with jid=j632954729244.51766 mode=cleanup _async_dir=/root/.ansible_async
Dec  3 16:03:47 np0005544708 python3.9[52488]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:03:48 np0005544708 python3.9[52611]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764795827.0623922-322-128559723055906/.source.returncode _original_basename=.6b49ecuh follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:03:48 np0005544708 python3.9[52763]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:03:49 np0005544708 python3.9[52886]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764795828.4498706-338-12156648532113/.source.cfg _original_basename=.7avz18o5 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:03:50 np0005544708 python3.9[53039]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 16:03:50 np0005544708 systemd[1]: Reloading Network Manager...
Dec  3 16:03:50 np0005544708 NetworkManager[48996]: <info>  [1764795830.3900] audit: op="reload" arg="0" pid=53043 uid=0 result="success"
Dec  3 16:03:50 np0005544708 NetworkManager[48996]: <info>  [1764795830.3909] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec  3 16:03:50 np0005544708 systemd[1]: Reloaded Network Manager.
Dec  3 16:03:50 np0005544708 systemd[1]: session-9.scope: Deactivated successfully.
Dec  3 16:03:50 np0005544708 systemd[1]: session-9.scope: Consumed 49.984s CPU time.
Dec  3 16:03:50 np0005544708 systemd-logind[787]: Session 9 logged out. Waiting for processes to exit.
Dec  3 16:03:50 np0005544708 systemd-logind[787]: Removed session 9.
Dec  3 16:03:51 np0005544708 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  3 16:03:56 np0005544708 systemd-logind[787]: New session 10 of user zuul.
Dec  3 16:03:56 np0005544708 systemd[1]: Started Session 10 of User zuul.
Dec  3 16:03:57 np0005544708 python3.9[53229]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:03:58 np0005544708 python3.9[53383]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 16:03:59 np0005544708 python3.9[53576]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:03:59 np0005544708 systemd[1]: session-10.scope: Deactivated successfully.
Dec  3 16:03:59 np0005544708 systemd[1]: session-10.scope: Consumed 2.391s CPU time.
Dec  3 16:03:59 np0005544708 systemd-logind[787]: Session 10 logged out. Waiting for processes to exit.
Dec  3 16:03:59 np0005544708 systemd-logind[787]: Removed session 10.
Dec  3 16:04:00 np0005544708 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  3 16:04:07 np0005544708 systemd-logind[787]: New session 11 of user zuul.
Dec  3 16:04:07 np0005544708 systemd[1]: Started Session 11 of User zuul.
Dec  3 16:04:08 np0005544708 python3.9[53759]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:04:09 np0005544708 python3.9[53914]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:04:10 np0005544708 python3.9[54070]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 16:04:11 np0005544708 python3.9[54155]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:04:13 np0005544708 python3.9[54311]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 16:04:14 np0005544708 python3.9[54506]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:04:15 np0005544708 python3.9[54658]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:04:15 np0005544708 systemd[1]: var-lib-containers-storage-overlay-compat1766208549-merged.mount: Deactivated successfully.
Dec  3 16:04:15 np0005544708 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1198909937-merged.mount: Deactivated successfully.
Dec  3 16:04:15 np0005544708 podman[54659]: 2025-12-03 21:04:15.9432519 +0000 UTC m=+0.073472754 system refresh
Dec  3 16:04:15 np0005544708 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 16:04:16 np0005544708 python3.9[54821]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:04:17 np0005544708 python3.9[54944]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764795856.179343-79-84528925027707/.source.json follow=False _original_basename=podman_network_config.j2 checksum=9bc2e1a602f29a322097501b442d6abeaac1c740 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:04:18 np0005544708 python3.9[55096]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:04:18 np0005544708 python3.9[55219]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764795857.8341827-94-207874164122947/.source.conf follow=False _original_basename=registries.conf.j2 checksum=8c73fbc0d7cddf5b89d40cde842a385025fa8102 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:04:19 np0005544708 python3.9[55371]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:04:20 np0005544708 python3.9[55523]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:04:21 np0005544708 python3.9[55675]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:04:21 np0005544708 python3.9[55827]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:04:22 np0005544708 python3.9[55979]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:04:25 np0005544708 python3.9[56132]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:04:25 np0005544708 python3.9[56286]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:04:26 np0005544708 python3.9[56438]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:04:27 np0005544708 python3.9[56590]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:04:28 np0005544708 python3.9[56743]: ansible-service_facts Invoked
Dec  3 16:04:28 np0005544708 network[56760]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 16:04:28 np0005544708 network[56761]: 'network-scripts' will be removed from distribution in near future.
Dec  3 16:04:28 np0005544708 network[56762]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 16:04:34 np0005544708 python3.9[57214]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:04:37 np0005544708 python3.9[57367]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  3 16:04:38 np0005544708 python3.9[57519]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:04:38 np0005544708 python3.9[57644]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764795877.8040223-238-101484405873619/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:04:39 np0005544708 python3.9[57798]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:04:40 np0005544708 python3.9[57923]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764795879.1879485-253-231961925731425/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:04:41 np0005544708 python3.9[58077]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:04:42 np0005544708 python3.9[58231]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 16:04:44 np0005544708 python3.9[58315]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:04:45 np0005544708 python3.9[58469]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 16:04:46 np0005544708 python3.9[58553]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 16:04:46 np0005544708 chronyd[798]: chronyd exiting
Dec  3 16:04:46 np0005544708 systemd[1]: Stopping NTP client/server...
Dec  3 16:04:46 np0005544708 systemd[1]: chronyd.service: Deactivated successfully.
Dec  3 16:04:46 np0005544708 systemd[1]: Stopped NTP client/server.
Dec  3 16:04:46 np0005544708 systemd[1]: Starting NTP client/server...
Dec  3 16:04:46 np0005544708 chronyd[58561]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  3 16:04:46 np0005544708 chronyd[58561]: Frequency -26.368 +/- 0.290 ppm read from /var/lib/chrony/drift
Dec  3 16:04:46 np0005544708 chronyd[58561]: Loaded seccomp filter (level 2)
Dec  3 16:04:46 np0005544708 systemd[1]: Started NTP client/server.
Dec  3 16:04:46 np0005544708 systemd[1]: session-11.scope: Deactivated successfully.
Dec  3 16:04:46 np0005544708 systemd[1]: session-11.scope: Consumed 27.574s CPU time.
Dec  3 16:04:46 np0005544708 systemd-logind[787]: Session 11 logged out. Waiting for processes to exit.
Dec  3 16:04:46 np0005544708 systemd-logind[787]: Removed session 11.
Dec  3 16:04:51 np0005544708 systemd-logind[787]: New session 12 of user zuul.
Dec  3 16:04:51 np0005544708 systemd[1]: Started Session 12 of User zuul.
Dec  3 16:04:52 np0005544708 python3.9[58742]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:04:53 np0005544708 python3.9[58894]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:04:54 np0005544708 python3.9[59017]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764795892.9113667-34-41276866286450/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:04:54 np0005544708 systemd[1]: session-12.scope: Deactivated successfully.
Dec  3 16:04:54 np0005544708 systemd[1]: session-12.scope: Consumed 1.761s CPU time.
Dec  3 16:04:54 np0005544708 systemd-logind[787]: Session 12 logged out. Waiting for processes to exit.
Dec  3 16:04:54 np0005544708 systemd-logind[787]: Removed session 12.
Dec  3 16:05:01 np0005544708 systemd-logind[787]: New session 13 of user zuul.
Dec  3 16:05:01 np0005544708 systemd[1]: Started Session 13 of User zuul.
Dec  3 16:05:02 np0005544708 python3.9[59195]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:05:03 np0005544708 python3.9[59351]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:04 np0005544708 python3.9[59526]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:05:05 np0005544708 python3.9[59649]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764795903.5440652-41-15210904089634/.source.json _original_basename=.yirh0_ms follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:05 np0005544708 python3.9[59801]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:05:06 np0005544708 python3.9[59924]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764795905.4087315-64-118928620769666/.source _original_basename=.hgzoijt1 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:07 np0005544708 python3.9[60076]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:05:07 np0005544708 python3.9[60228]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:05:08 np0005544708 python3.9[60351]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764795907.3561344-88-133653242250833/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:05:09 np0005544708 python3.9[60503]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:05:09 np0005544708 python3.9[60626]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764795908.6217518-88-143673909519525/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:05:10 np0005544708 python3.9[60778]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:11 np0005544708 python3.9[60930]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:05:11 np0005544708 python3.9[61053]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764795910.713527-125-40185827812188/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:12 np0005544708 python3.9[61205]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:05:13 np0005544708 python3.9[61328]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764795912.173176-140-26140985187515/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:14 np0005544708 python3.9[61480]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:05:14 np0005544708 systemd[1]: Reloading.
Dec  3 16:05:14 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:05:14 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:05:14 np0005544708 systemd[1]: Reloading.
Dec  3 16:05:14 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:05:14 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:05:15 np0005544708 systemd[1]: Starting EDPM Container Shutdown...
Dec  3 16:05:15 np0005544708 systemd[1]: Finished EDPM Container Shutdown.
Dec  3 16:05:15 np0005544708 python3.9[61709]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:05:16 np0005544708 python3.9[61832]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764795915.3151317-163-225910520478162/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:17 np0005544708 python3.9[61984]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:05:17 np0005544708 python3.9[62107]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764795916.6644263-178-142760279748827/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:18 np0005544708 python3.9[62259]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:05:18 np0005544708 systemd[1]: Reloading.
Dec  3 16:05:18 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:05:18 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:05:18 np0005544708 systemd[1]: Reloading.
Dec  3 16:05:18 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:05:18 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:05:19 np0005544708 systemd[1]: Starting Create netns directory...
Dec  3 16:05:19 np0005544708 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  3 16:05:19 np0005544708 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  3 16:05:19 np0005544708 systemd[1]: Finished Create netns directory.
Dec  3 16:05:20 np0005544708 python3.9[62485]: ansible-ansible.builtin.service_facts Invoked
Dec  3 16:05:20 np0005544708 network[62502]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 16:05:20 np0005544708 network[62503]: 'network-scripts' will be removed from distribution in near future.
Dec  3 16:05:20 np0005544708 network[62504]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 16:05:24 np0005544708 python3.9[62770]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:05:24 np0005544708 systemd[1]: Reloading.
Dec  3 16:05:24 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:05:24 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:05:24 np0005544708 systemd[1]: Stopping IPv4 firewall with iptables...
Dec  3 16:05:24 np0005544708 iptables.init[62810]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec  3 16:05:24 np0005544708 iptables.init[62810]: iptables: Flushing firewall rules: [  OK  ]
Dec  3 16:05:24 np0005544708 systemd[1]: iptables.service: Deactivated successfully.
Dec  3 16:05:24 np0005544708 systemd[1]: Stopped IPv4 firewall with iptables.
Dec  3 16:05:25 np0005544708 python3.9[63008]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:05:26 np0005544708 python3.9[63162]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:05:26 np0005544708 systemd[1]: Reloading.
Dec  3 16:05:26 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:05:26 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:05:27 np0005544708 systemd[1]: Starting Netfilter Tables...
Dec  3 16:05:27 np0005544708 systemd[1]: Finished Netfilter Tables.
Dec  3 16:05:27 np0005544708 python3.9[63356]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:05:29 np0005544708 python3.9[63509]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:05:29 np0005544708 python3.9[63635]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764795928.396085-247-33490062234291/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:30 np0005544708 python3.9[63789]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 16:05:30 np0005544708 systemd[1]: Reloading OpenSSH server daemon...
Dec  3 16:05:30 np0005544708 systemd[1]: Reloaded OpenSSH server daemon.
Dec  3 16:05:31 np0005544708 python3.9[63945]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:32 np0005544708 python3.9[64097]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:05:32 np0005544708 python3.9[64220]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764795931.533693-278-204717928591321/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:33 np0005544708 python3.9[64372]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  3 16:05:33 np0005544708 systemd[1]: Starting Time & Date Service...
Dec  3 16:05:33 np0005544708 systemd[1]: Started Time & Date Service.
Dec  3 16:05:34 np0005544708 python3.9[64528]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:35 np0005544708 python3.9[64680]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:05:35 np0005544708 python3.9[64803]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764795934.8472922-313-188161018391394/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:36 np0005544708 python3.9[64955]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:05:37 np0005544708 python3.9[65078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764795936.0547075-328-67765121931819/.source.yaml _original_basename=.esxxuqjv follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:38 np0005544708 python3.9[65230]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:05:38 np0005544708 python3.9[65353]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764795937.4544232-343-266905661443422/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:39 np0005544708 python3.9[65505]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:05:40 np0005544708 python3.9[65658]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:05:41 np0005544708 python3[65811]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  3 16:05:41 np0005544708 python3.9[65963]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:05:42 np0005544708 python3.9[66086]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764795941.24429-382-209878639625021/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:43 np0005544708 python3.9[66238]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:05:43 np0005544708 python3.9[66361]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764795942.5357482-397-24251246083243/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:44 np0005544708 python3.9[66513]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:05:45 np0005544708 python3.9[66636]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764795943.7905405-412-280377068001652/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:45 np0005544708 python3.9[66788]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:05:46 np0005544708 python3.9[66911]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764795945.3336186-427-96654835139848/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:47 np0005544708 python3.9[67063]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:05:47 np0005544708 python3.9[67186]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764795946.6427174-442-20870292554775/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:48 np0005544708 python3.9[67338]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:49 np0005544708 python3.9[67490]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:05:50 np0005544708 python3.9[67649]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:51 np0005544708 python3.9[67802]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:51 np0005544708 python3.9[67954]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:05:52 np0005544708 python3.9[68106]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  3 16:05:53 np0005544708 python3.9[68259]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  3 16:05:54 np0005544708 systemd[1]: session-13.scope: Deactivated successfully.
Dec  3 16:05:54 np0005544708 systemd[1]: session-13.scope: Consumed 39.295s CPU time.
Dec  3 16:05:54 np0005544708 systemd-logind[787]: Session 13 logged out. Waiting for processes to exit.
Dec  3 16:05:54 np0005544708 systemd-logind[787]: Removed session 13.
Dec  3 16:05:59 np0005544708 systemd-logind[787]: New session 14 of user zuul.
Dec  3 16:05:59 np0005544708 systemd[1]: Started Session 14 of User zuul.
Dec  3 16:05:59 np0005544708 python3.9[68440]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  3 16:06:00 np0005544708 python3.9[68592]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:06:02 np0005544708 python3.9[68744]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:06:03 np0005544708 python3.9[68896]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/QcnFnE07R2H02WXa+53W3W+nwsFsC4YoQpDZUgxEwlg4f2zQf8fQIG23b5h9N8ej11I+FwfST4eb14wdXsFBAm6rVbCzkwQOmaDc1DdRfSmSFzwYKgqnejjeunc7W9ASRY8ZFAX/dexoruuzsoDFSnT/YK2DiUDLCoWmwO4mZ946GvsVF6yCywprEQo/oFdVyYbYBvGnl2hb9O06ePH8wQRx2BT7GKvzyv0j8Dz3LjXOzrd+jB7UlvodWIaHPlQhq/S/ZDfA640mfL7TSk/VRKvnWyi4m3+Gbj0A92cO36Objq1V2W1DPen5Nzv5CbZRHNjBvVR9G0jGLdsP8sWtUhe2qfiLZlAx0Cn0ZIhzPbS2Ij3lgp1Otug1NK15JYpiz9z0JO+UgfdZ9ht6yAYnsMcQ4OaFvKqWmsOxrx76BJ8s3hQuBMrZL+YgtbDswJVFn9/ay22MQ+ntCLeQL6GPb6WQJGnnWYqSlUX3e8wBllkbHrFK1/iyfqWjrHwteK8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINtkoZCmFpb3z8TzbldoOvjALaFBxUWmFrtA4oHE040r#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAzVjP1T+0nWOYuc0KdOyqtmhcGoQseIckbkxVi0stL4dfIoBsNFyujIS49nno21BKZJb6EV/fwil4CuPgbMlGg=#012 create=True mode=0644 path=/tmp/ansible.i42xset5 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:06:03 np0005544708 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  3 16:06:04 np0005544708 python3.9[69050]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.i42xset5' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:06:05 np0005544708 python3.9[69204]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.i42xset5 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:06:05 np0005544708 systemd[1]: session-14.scope: Deactivated successfully.
Dec  3 16:06:05 np0005544708 systemd[1]: session-14.scope: Consumed 4.103s CPU time.
Dec  3 16:06:05 np0005544708 systemd-logind[787]: Session 14 logged out. Waiting for processes to exit.
Dec  3 16:06:05 np0005544708 systemd-logind[787]: Removed session 14.
Dec  3 16:06:11 np0005544708 systemd-logind[787]: New session 15 of user zuul.
Dec  3 16:06:11 np0005544708 systemd[1]: Started Session 15 of User zuul.
Dec  3 16:06:12 np0005544708 python3.9[69382]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:06:14 np0005544708 python3.9[69538]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  3 16:06:15 np0005544708 python3.9[69692]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 16:06:16 np0005544708 python3.9[69845]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:06:17 np0005544708 python3.9[69998]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:06:17 np0005544708 python3.9[70152]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:06:18 np0005544708 python3.9[70307]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:06:19 np0005544708 systemd[1]: session-15.scope: Deactivated successfully.
Dec  3 16:06:19 np0005544708 systemd-logind[787]: Session 15 logged out. Waiting for processes to exit.
Dec  3 16:06:19 np0005544708 systemd[1]: session-15.scope: Consumed 5.360s CPU time.
Dec  3 16:06:19 np0005544708 systemd-logind[787]: Removed session 15.
Dec  3 16:06:24 np0005544708 systemd-logind[787]: New session 16 of user zuul.
Dec  3 16:06:24 np0005544708 systemd[1]: Started Session 16 of User zuul.
Dec  3 16:06:25 np0005544708 python3.9[70485]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:06:27 np0005544708 python3.9[70641]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 16:06:27 np0005544708 python3.9[70725]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  3 16:06:30 np0005544708 python3.9[70876]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:06:31 np0005544708 python3.9[71027]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 16:06:32 np0005544708 python3.9[71177]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:06:32 np0005544708 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 16:06:32 np0005544708 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 16:06:33 np0005544708 python3.9[71328]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:06:33 np0005544708 systemd[1]: session-16.scope: Deactivated successfully.
Dec  3 16:06:33 np0005544708 systemd[1]: session-16.scope: Consumed 6.375s CPU time.
Dec  3 16:06:33 np0005544708 systemd-logind[787]: Session 16 logged out. Waiting for processes to exit.
Dec  3 16:06:33 np0005544708 systemd-logind[787]: Removed session 16.
Dec  3 16:06:41 np0005544708 systemd-logind[787]: New session 17 of user zuul.
Dec  3 16:06:41 np0005544708 systemd[1]: Started Session 17 of User zuul.
Dec  3 16:06:47 np0005544708 python3[72095]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:06:48 np0005544708 python3[72190]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  3 16:06:50 np0005544708 python3[72217]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 16:06:50 np0005544708 python3[72243]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:06:50 np0005544708 kernel: loop: module loaded
Dec  3 16:06:50 np0005544708 kernel: loop3: detected capacity change from 0 to 41943040
Dec  3 16:06:50 np0005544708 python3[72278]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:06:51 np0005544708 lvm[72281]: PV /dev/loop3 not used.
Dec  3 16:06:51 np0005544708 lvm[72283]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:06:51 np0005544708 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec  3 16:06:51 np0005544708 lvm[72293]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:06:51 np0005544708 lvm[72293]: VG ceph_vg0 finished
Dec  3 16:06:51 np0005544708 lvm[72291]:  1 logical volume(s) in volume group "ceph_vg0" now active
Dec  3 16:06:51 np0005544708 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec  3 16:06:51 np0005544708 python3[72371]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 16:06:52 np0005544708 python3[72444]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764796011.4022021-36343-12362086634002/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:06:52 np0005544708 python3[72494]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:06:52 np0005544708 systemd[1]: Reloading.
Dec  3 16:06:53 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:06:53 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:06:53 np0005544708 systemd[1]: Starting Ceph OSD losetup...
Dec  3 16:06:53 np0005544708 bash[72533]: /dev/loop3: [64513]:4327948 (/var/lib/ceph-osd-0.img)
Dec  3 16:06:53 np0005544708 systemd[1]: Finished Ceph OSD losetup.
Dec  3 16:06:53 np0005544708 lvm[72534]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:06:53 np0005544708 lvm[72534]: VG ceph_vg0 finished
Dec  3 16:06:53 np0005544708 python3[72560]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  3 16:06:54 np0005544708 chronyd[58561]: Selected source 162.159.200.123 (pool.ntp.org)
Dec  3 16:06:55 np0005544708 python3[72587]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 16:06:55 np0005544708 python3[72613]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:06:55 np0005544708 kernel: loop4: detected capacity change from 0 to 41943040
Dec  3 16:06:55 np0005544708 python3[72645]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:06:55 np0005544708 lvm[72648]: PV /dev/loop4 not used.
Dec  3 16:06:56 np0005544708 lvm[72657]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:06:56 np0005544708 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec  3 16:06:56 np0005544708 lvm[72659]:  1 logical volume(s) in volume group "ceph_vg1" now active
Dec  3 16:06:56 np0005544708 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec  3 16:06:56 np0005544708 python3[72737]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 16:06:56 np0005544708 python3[72810]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764796016.2793815-36370-175935688238143/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:06:57 np0005544708 python3[72860]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:06:57 np0005544708 systemd[1]: Reloading.
Dec  3 16:06:57 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:06:57 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:06:57 np0005544708 systemd[1]: Starting Ceph OSD losetup...
Dec  3 16:06:57 np0005544708 bash[72900]: /dev/loop4: [64513]:4327955 (/var/lib/ceph-osd-1.img)
Dec  3 16:06:57 np0005544708 systemd[1]: Finished Ceph OSD losetup.
Dec  3 16:06:57 np0005544708 lvm[72901]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:06:57 np0005544708 lvm[72901]: VG ceph_vg1 finished
Dec  3 16:06:58 np0005544708 python3[72927]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  3 16:06:59 np0005544708 python3[72954]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 16:07:00 np0005544708 python3[72980]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:07:00 np0005544708 kernel: loop5: detected capacity change from 0 to 41943040
Dec  3 16:07:00 np0005544708 python3[73012]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:07:00 np0005544708 lvm[73015]: PV /dev/loop5 not used.
Dec  3 16:07:00 np0005544708 lvm[73017]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:07:00 np0005544708 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Dec  3 16:07:00 np0005544708 lvm[73027]:  1 logical volume(s) in volume group "ceph_vg2" now active
Dec  3 16:07:00 np0005544708 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Dec  3 16:07:01 np0005544708 python3[73105]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 16:07:01 np0005544708 python3[73178]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764796021.059373-36401-200170879901882/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:07:02 np0005544708 python3[73228]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:07:02 np0005544708 systemd[1]: Reloading.
Dec  3 16:07:02 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:07:02 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:07:02 np0005544708 systemd[1]: Starting Ceph OSD losetup...
Dec  3 16:07:02 np0005544708 bash[73268]: /dev/loop5: [64513]:4327966 (/var/lib/ceph-osd-2.img)
Dec  3 16:07:02 np0005544708 systemd[1]: Finished Ceph OSD losetup.
Dec  3 16:07:02 np0005544708 lvm[73269]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:07:02 np0005544708 lvm[73269]: VG ceph_vg2 finished
Dec  3 16:07:04 np0005544708 python3[73293]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:07:06 np0005544708 python3[73386]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  3 16:07:09 np0005544708 python3[73443]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  3 16:07:14 np0005544708 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 16:07:14 np0005544708 systemd[1]: Starting man-db-cache-update.service...
Dec  3 16:07:14 np0005544708 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 16:07:14 np0005544708 systemd[1]: Finished man-db-cache-update.service.
Dec  3 16:07:14 np0005544708 systemd[1]: run-r8c46d8a96fdb4872aec1feda2bd47b9c.service: Deactivated successfully.
Dec  3 16:07:14 np0005544708 python3[73562]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 16:07:15 np0005544708 python3[73590]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:07:15 np0005544708 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 16:07:16 np0005544708 python3[73627]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:07:16 np0005544708 python3[73653]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:07:17 np0005544708 python3[73731]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 16:07:17 np0005544708 python3[73804]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764796037.2364986-36549-6794082631691/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:07:18 np0005544708 python3[73906]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 16:07:19 np0005544708 python3[73979]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764796038.4235067-36567-102611996598372/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:07:19 np0005544708 python3[74029]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 16:07:19 np0005544708 python3[74057]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 16:07:20 np0005544708 python3[74085]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 16:07:20 np0005544708 python3[74113]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:07:20 np0005544708 systemd[1]: Created slice User Slice of UID 42477.
Dec  3 16:07:20 np0005544708 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  3 16:07:20 np0005544708 systemd-logind[787]: New session 18 of user ceph-admin.
Dec  3 16:07:21 np0005544708 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  3 16:07:21 np0005544708 systemd[1]: Starting User Manager for UID 42477...
Dec  3 16:07:21 np0005544708 systemd[74121]: Queued start job for default target Main User Target.
Dec  3 16:07:21 np0005544708 systemd[74121]: Created slice User Application Slice.
Dec  3 16:07:21 np0005544708 systemd[74121]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  3 16:07:21 np0005544708 systemd[74121]: Started Daily Cleanup of User's Temporary Directories.
Dec  3 16:07:21 np0005544708 systemd[74121]: Reached target Paths.
Dec  3 16:07:21 np0005544708 systemd[74121]: Reached target Timers.
Dec  3 16:07:21 np0005544708 systemd[74121]: Starting D-Bus User Message Bus Socket...
Dec  3 16:07:21 np0005544708 systemd[74121]: Starting Create User's Volatile Files and Directories...
Dec  3 16:07:21 np0005544708 systemd[74121]: Finished Create User's Volatile Files and Directories.
Dec  3 16:07:21 np0005544708 systemd[74121]: Listening on D-Bus User Message Bus Socket.
Dec  3 16:07:21 np0005544708 systemd[74121]: Reached target Sockets.
Dec  3 16:07:21 np0005544708 systemd[74121]: Reached target Basic System.
Dec  3 16:07:21 np0005544708 systemd[74121]: Reached target Main User Target.
Dec  3 16:07:21 np0005544708 systemd[74121]: Startup finished in 148ms.
Dec  3 16:07:21 np0005544708 systemd[1]: Started User Manager for UID 42477.
Dec  3 16:07:21 np0005544708 systemd[1]: Started Session 18 of User ceph-admin.
Dec  3 16:07:21 np0005544708 systemd[1]: session-18.scope: Deactivated successfully.
Dec  3 16:07:21 np0005544708 systemd-logind[787]: Session 18 logged out. Waiting for processes to exit.
Dec  3 16:07:21 np0005544708 systemd-logind[787]: Removed session 18.
Dec  3 16:07:21 np0005544708 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 16:07:21 np0005544708 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 16:07:23 np0005544708 systemd[1]: var-lib-containers-storage-overlay-compat525026094-merged.mount: Deactivated successfully.
Dec  3 16:07:24 np0005544708 systemd[1]: var-lib-containers-storage-overlay-compat525026094-lower\x2dmapped.mount: Deactivated successfully.
Dec  3 16:07:31 np0005544708 systemd[1]: Stopping User Manager for UID 42477...
Dec  3 16:07:31 np0005544708 systemd[74121]: Activating special unit Exit the Session...
Dec  3 16:07:31 np0005544708 systemd[74121]: Stopped target Main User Target.
Dec  3 16:07:31 np0005544708 systemd[74121]: Stopped target Basic System.
Dec  3 16:07:31 np0005544708 systemd[74121]: Stopped target Paths.
Dec  3 16:07:31 np0005544708 systemd[74121]: Stopped target Sockets.
Dec  3 16:07:31 np0005544708 systemd[74121]: Stopped target Timers.
Dec  3 16:07:31 np0005544708 systemd[74121]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  3 16:07:31 np0005544708 systemd[74121]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  3 16:07:31 np0005544708 systemd[74121]: Closed D-Bus User Message Bus Socket.
Dec  3 16:07:31 np0005544708 systemd[74121]: Stopped Create User's Volatile Files and Directories.
Dec  3 16:07:31 np0005544708 systemd[74121]: Removed slice User Application Slice.
Dec  3 16:07:31 np0005544708 systemd[74121]: Reached target Shutdown.
Dec  3 16:07:31 np0005544708 systemd[74121]: Finished Exit the Session.
Dec  3 16:07:31 np0005544708 systemd[74121]: Reached target Exit the Session.
Dec  3 16:07:31 np0005544708 systemd[1]: user@42477.service: Deactivated successfully.
Dec  3 16:07:31 np0005544708 systemd[1]: Stopped User Manager for UID 42477.
Dec  3 16:07:31 np0005544708 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec  3 16:07:31 np0005544708 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec  3 16:07:31 np0005544708 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec  3 16:07:31 np0005544708 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec  3 16:07:31 np0005544708 systemd[1]: Removed slice User Slice of UID 42477.
Dec  3 16:07:52 np0005544708 podman[74215]: 2025-12-03 21:07:52.662207341 +0000 UTC m=+31.029170905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:07:52 np0005544708 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 16:07:52 np0005544708 podman[74315]: 2025-12-03 21:07:52.725241966 +0000 UTC m=+0.041476640 container create cfa3bef3b1357c74a02f70d5f0ed213e57889efee2675bc5df8b105852e29105 (image=quay.io/ceph/ceph:v20, name=xenodochial_lewin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 16:07:52 np0005544708 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec  3 16:07:52 np0005544708 systemd[1]: Started libpod-conmon-cfa3bef3b1357c74a02f70d5f0ed213e57889efee2675bc5df8b105852e29105.scope.
Dec  3 16:07:52 np0005544708 podman[74315]: 2025-12-03 21:07:52.704631464 +0000 UTC m=+0.020866158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:07:52 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:07:52 np0005544708 podman[74315]: 2025-12-03 21:07:52.842929951 +0000 UTC m=+0.159164655 container init cfa3bef3b1357c74a02f70d5f0ed213e57889efee2675bc5df8b105852e29105 (image=quay.io/ceph/ceph:v20, name=xenodochial_lewin, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec  3 16:07:52 np0005544708 podman[74315]: 2025-12-03 21:07:52.855127846 +0000 UTC m=+0.171362520 container start cfa3bef3b1357c74a02f70d5f0ed213e57889efee2675bc5df8b105852e29105 (image=quay.io/ceph/ceph:v20, name=xenodochial_lewin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec  3 16:07:52 np0005544708 podman[74315]: 2025-12-03 21:07:52.858984189 +0000 UTC m=+0.175218893 container attach cfa3bef3b1357c74a02f70d5f0ed213e57889efee2675bc5df8b105852e29105 (image=quay.io/ceph/ceph:v20, name=xenodochial_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:07:52 np0005544708 xenodochial_lewin[74331]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Dec  3 16:07:52 np0005544708 systemd[1]: libpod-cfa3bef3b1357c74a02f70d5f0ed213e57889efee2675bc5df8b105852e29105.scope: Deactivated successfully.
Dec  3 16:07:52 np0005544708 podman[74315]: 2025-12-03 21:07:52.971356702 +0000 UTC m=+0.287591386 container died cfa3bef3b1357c74a02f70d5f0ed213e57889efee2675bc5df8b105852e29105 (image=quay.io/ceph/ceph:v20, name=xenodochial_lewin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 16:07:52 np0005544708 systemd[1]: var-lib-containers-storage-overlay-6f4effdcdd993c630bc445662db3d2ed5919b08724f1a8ca24ff12ef9c1b4d48-merged.mount: Deactivated successfully.
Dec  3 16:07:53 np0005544708 podman[74315]: 2025-12-03 21:07:53.017489885 +0000 UTC m=+0.333724599 container remove cfa3bef3b1357c74a02f70d5f0ed213e57889efee2675bc5df8b105852e29105 (image=quay.io/ceph/ceph:v20, name=xenodochial_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:07:53 np0005544708 systemd[1]: libpod-conmon-cfa3bef3b1357c74a02f70d5f0ed213e57889efee2675bc5df8b105852e29105.scope: Deactivated successfully.
Dec  3 16:07:53 np0005544708 podman[74350]: 2025-12-03 21:07:53.098067009 +0000 UTC m=+0.056926263 container create fa13357bc690ccbccb112fb2a7760709bc0a0fc015ed78b296798422824b06fc (image=quay.io/ceph/ceph:v20, name=magical_lamport, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:07:53 np0005544708 systemd[1]: Started libpod-conmon-fa13357bc690ccbccb112fb2a7760709bc0a0fc015ed78b296798422824b06fc.scope.
Dec  3 16:07:53 np0005544708 podman[74350]: 2025-12-03 21:07:53.067832541 +0000 UTC m=+0.026692245 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:07:53 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:07:53 np0005544708 podman[74350]: 2025-12-03 21:07:53.203408634 +0000 UTC m=+0.162267938 container init fa13357bc690ccbccb112fb2a7760709bc0a0fc015ed78b296798422824b06fc (image=quay.io/ceph/ceph:v20, name=magical_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:07:53 np0005544708 podman[74350]: 2025-12-03 21:07:53.210546005 +0000 UTC m=+0.169405249 container start fa13357bc690ccbccb112fb2a7760709bc0a0fc015ed78b296798422824b06fc (image=quay.io/ceph/ceph:v20, name=magical_lamport, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:07:53 np0005544708 magical_lamport[74367]: 167 167
Dec  3 16:07:53 np0005544708 podman[74350]: 2025-12-03 21:07:53.2148886 +0000 UTC m=+0.173747844 container attach fa13357bc690ccbccb112fb2a7760709bc0a0fc015ed78b296798422824b06fc (image=quay.io/ceph/ceph:v20, name=magical_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:07:53 np0005544708 systemd[1]: libpod-fa13357bc690ccbccb112fb2a7760709bc0a0fc015ed78b296798422824b06fc.scope: Deactivated successfully.
Dec  3 16:07:53 np0005544708 podman[74350]: 2025-12-03 21:07:53.216672008 +0000 UTC m=+0.175531302 container died fa13357bc690ccbccb112fb2a7760709bc0a0fc015ed78b296798422824b06fc (image=quay.io/ceph/ceph:v20, name=magical_lamport, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  3 16:07:53 np0005544708 podman[74350]: 2025-12-03 21:07:53.26685819 +0000 UTC m=+0.225717444 container remove fa13357bc690ccbccb112fb2a7760709bc0a0fc015ed78b296798422824b06fc (image=quay.io/ceph/ceph:v20, name=magical_lamport, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec  3 16:07:53 np0005544708 systemd[1]: libpod-conmon-fa13357bc690ccbccb112fb2a7760709bc0a0fc015ed78b296798422824b06fc.scope: Deactivated successfully.
Dec  3 16:07:53 np0005544708 podman[74386]: 2025-12-03 21:07:53.370779887 +0000 UTC m=+0.068474031 container create 01dc863d99e55dcf4c1a3bfd0cf11c9411dec70bd9d141c348567cf860218b39 (image=quay.io/ceph/ceph:v20, name=recursing_payne, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:07:53 np0005544708 systemd[1]: Started libpod-conmon-01dc863d99e55dcf4c1a3bfd0cf11c9411dec70bd9d141c348567cf860218b39.scope.
Dec  3 16:07:53 np0005544708 podman[74386]: 2025-12-03 21:07:53.341087123 +0000 UTC m=+0.038781317 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:07:53 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:07:53 np0005544708 podman[74386]: 2025-12-03 21:07:53.456885748 +0000 UTC m=+0.154579952 container init 01dc863d99e55dcf4c1a3bfd0cf11c9411dec70bd9d141c348567cf860218b39 (image=quay.io/ceph/ceph:v20, name=recursing_payne, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:07:53 np0005544708 podman[74386]: 2025-12-03 21:07:53.491743529 +0000 UTC m=+0.189437633 container start 01dc863d99e55dcf4c1a3bfd0cf11c9411dec70bd9d141c348567cf860218b39 (image=quay.io/ceph/ceph:v20, name=recursing_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:07:53 np0005544708 podman[74386]: 2025-12-03 21:07:53.49548206 +0000 UTC m=+0.193176174 container attach 01dc863d99e55dcf4c1a3bfd0cf11c9411dec70bd9d141c348567cf860218b39 (image=quay.io/ceph/ceph:v20, name=recursing_payne, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:07:53 np0005544708 recursing_payne[74403]: AQCppjBpMJnQHhAAxGJU1UHRttnvhIEom6jHMg==
Dec  3 16:07:53 np0005544708 systemd[1]: libpod-01dc863d99e55dcf4c1a3bfd0cf11c9411dec70bd9d141c348567cf860218b39.scope: Deactivated successfully.
Dec  3 16:07:53 np0005544708 podman[74386]: 2025-12-03 21:07:53.52093023 +0000 UTC m=+0.218624334 container died 01dc863d99e55dcf4c1a3bfd0cf11c9411dec70bd9d141c348567cf860218b39 (image=quay.io/ceph/ceph:v20, name=recursing_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:07:53 np0005544708 podman[74386]: 2025-12-03 21:07:53.561994677 +0000 UTC m=+0.259688781 container remove 01dc863d99e55dcf4c1a3bfd0cf11c9411dec70bd9d141c348567cf860218b39 (image=quay.io/ceph/ceph:v20, name=recursing_payne, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 16:07:53 np0005544708 systemd[1]: libpod-conmon-01dc863d99e55dcf4c1a3bfd0cf11c9411dec70bd9d141c348567cf860218b39.scope: Deactivated successfully.
Dec  3 16:07:53 np0005544708 podman[74422]: 2025-12-03 21:07:53.653984755 +0000 UTC m=+0.060437465 container create fb488507d3399c212fe8aa467a4a6b056db47d76762a6c0b3be6b38a00d62644 (image=quay.io/ceph/ceph:v20, name=festive_tu, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:07:53 np0005544708 systemd[1]: Started libpod-conmon-fb488507d3399c212fe8aa467a4a6b056db47d76762a6c0b3be6b38a00d62644.scope.
Dec  3 16:07:53 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:07:53 np0005544708 podman[74422]: 2025-12-03 21:07:53.63319864 +0000 UTC m=+0.039651430 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:07:53 np0005544708 podman[74422]: 2025-12-03 21:07:53.737172118 +0000 UTC m=+0.143624928 container init fb488507d3399c212fe8aa467a4a6b056db47d76762a6c0b3be6b38a00d62644 (image=quay.io/ceph/ceph:v20, name=festive_tu, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:07:53 np0005544708 podman[74422]: 2025-12-03 21:07:53.743816466 +0000 UTC m=+0.150269216 container start fb488507d3399c212fe8aa467a4a6b056db47d76762a6c0b3be6b38a00d62644 (image=quay.io/ceph/ceph:v20, name=festive_tu, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 16:07:53 np0005544708 podman[74422]: 2025-12-03 21:07:53.747892745 +0000 UTC m=+0.154345525 container attach fb488507d3399c212fe8aa467a4a6b056db47d76762a6c0b3be6b38a00d62644 (image=quay.io/ceph/ceph:v20, name=festive_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec  3 16:07:53 np0005544708 festive_tu[74438]: AQCppjBpMrH/LRAA51zxlKZtaR3lOo3HPtc7fg==
Dec  3 16:07:53 np0005544708 systemd[1]: libpod-fb488507d3399c212fe8aa467a4a6b056db47d76762a6c0b3be6b38a00d62644.scope: Deactivated successfully.
Dec  3 16:07:53 np0005544708 podman[74445]: 2025-12-03 21:07:53.838173448 +0000 UTC m=+0.043748211 container died fb488507d3399c212fe8aa467a4a6b056db47d76762a6c0b3be6b38a00d62644 (image=quay.io/ceph/ceph:v20, name=festive_tu, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:07:53 np0005544708 systemd[1]: var-lib-containers-storage-overlay-01e033bd4d9a27f01aa93a6be8b52359cae7bf9c9d7e9cb5b6e2ba45850209be-merged.mount: Deactivated successfully.
Dec  3 16:07:53 np0005544708 podman[74445]: 2025-12-03 21:07:53.875880085 +0000 UTC m=+0.081454858 container remove fb488507d3399c212fe8aa467a4a6b056db47d76762a6c0b3be6b38a00d62644 (image=quay.io/ceph/ceph:v20, name=festive_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 16:07:53 np0005544708 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 16:07:53 np0005544708 systemd[1]: libpod-conmon-fb488507d3399c212fe8aa467a4a6b056db47d76762a6c0b3be6b38a00d62644.scope: Deactivated successfully.
Dec  3 16:07:53 np0005544708 podman[74461]: 2025-12-03 21:07:53.951011003 +0000 UTC m=+0.045113497 container create b0038074b4239792dbd827a80e7b440869b6614d013c8cb899510d25ae307ce9 (image=quay.io/ceph/ceph:v20, name=cool_lichterman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:07:53 np0005544708 systemd[1]: Started libpod-conmon-b0038074b4239792dbd827a80e7b440869b6614d013c8cb899510d25ae307ce9.scope.
Dec  3 16:07:54 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:07:54 np0005544708 podman[74461]: 2025-12-03 21:07:53.931645375 +0000 UTC m=+0.025747879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:07:54 np0005544708 podman[74461]: 2025-12-03 21:07:54.388078544 +0000 UTC m=+0.482181078 container init b0038074b4239792dbd827a80e7b440869b6614d013c8cb899510d25ae307ce9 (image=quay.io/ceph/ceph:v20, name=cool_lichterman, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 16:07:54 np0005544708 podman[74461]: 2025-12-03 21:07:54.394214728 +0000 UTC m=+0.488317222 container start b0038074b4239792dbd827a80e7b440869b6614d013c8cb899510d25ae307ce9 (image=quay.io/ceph/ceph:v20, name=cool_lichterman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 16:07:54 np0005544708 cool_lichterman[74477]: AQCqpjBpRPTUGRAAGY4UVSICZn6CZFC4ph5YxQ==
Dec  3 16:07:54 np0005544708 systemd[1]: libpod-b0038074b4239792dbd827a80e7b440869b6614d013c8cb899510d25ae307ce9.scope: Deactivated successfully.
Dec  3 16:07:56 np0005544708 podman[74461]: 2025-12-03 21:07:56.733067612 +0000 UTC m=+2.827170256 container attach b0038074b4239792dbd827a80e7b440869b6614d013c8cb899510d25ae307ce9 (image=quay.io/ceph/ceph:v20, name=cool_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  3 16:07:56 np0005544708 podman[74461]: 2025-12-03 21:07:56.734381807 +0000 UTC m=+2.828484361 container died b0038074b4239792dbd827a80e7b440869b6614d013c8cb899510d25ae307ce9 (image=quay.io/ceph/ceph:v20, name=cool_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 16:07:56 np0005544708 systemd[1]: var-lib-containers-storage-overlay-0ae07e9bb40ee33a26ff332833560dcaf2b247df09be2bfe7d50c65ca261152b-merged.mount: Deactivated successfully.
Dec  3 16:07:56 np0005544708 podman[74461]: 2025-12-03 21:07:56.807358167 +0000 UTC m=+2.901460691 container remove b0038074b4239792dbd827a80e7b440869b6614d013c8cb899510d25ae307ce9 (image=quay.io/ceph/ceph:v20, name=cool_lichterman, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:07:56 np0005544708 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 16:07:56 np0005544708 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 16:07:56 np0005544708 systemd[1]: libpod-conmon-b0038074b4239792dbd827a80e7b440869b6614d013c8cb899510d25ae307ce9.scope: Deactivated successfully.
Dec  3 16:07:56 np0005544708 podman[74495]: 2025-12-03 21:07:56.883622995 +0000 UTC m=+0.049829213 container create 10eeea3b0bff7a8e1f23b523b5335e035569e5debedc2aa5ec1d5a58c1f43e55 (image=quay.io/ceph/ceph:v20, name=sharp_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:07:56 np0005544708 systemd[1]: Started libpod-conmon-10eeea3b0bff7a8e1f23b523b5335e035569e5debedc2aa5ec1d5a58c1f43e55.scope.
Dec  3 16:07:56 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:07:56 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684d75e7ff4caf0205f7b7bf0d4ea157a61d960ebaf18a309166960bf14b3710/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  3 16:07:56 np0005544708 podman[74495]: 2025-12-03 21:07:56.860136658 +0000 UTC m=+0.026342876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:07:56 np0005544708 podman[74495]: 2025-12-03 21:07:56.956807611 +0000 UTC m=+0.123013869 container init 10eeea3b0bff7a8e1f23b523b5335e035569e5debedc2aa5ec1d5a58c1f43e55 (image=quay.io/ceph/ceph:v20, name=sharp_bouman, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 16:07:56 np0005544708 podman[74495]: 2025-12-03 21:07:56.964372603 +0000 UTC m=+0.130578791 container start 10eeea3b0bff7a8e1f23b523b5335e035569e5debedc2aa5ec1d5a58c1f43e55 (image=quay.io/ceph/ceph:v20, name=sharp_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  3 16:07:56 np0005544708 podman[74495]: 2025-12-03 21:07:56.968970916 +0000 UTC m=+0.135177184 container attach 10eeea3b0bff7a8e1f23b523b5335e035569e5debedc2aa5ec1d5a58c1f43e55 (image=quay.io/ceph/ceph:v20, name=sharp_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  3 16:07:57 np0005544708 sharp_bouman[74511]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec  3 16:07:57 np0005544708 sharp_bouman[74511]: setting min_mon_release = tentacle
Dec  3 16:07:57 np0005544708 sharp_bouman[74511]: /usr/bin/monmaptool: set fsid to c21de27e-a7fd-594b-8324-0697ba9aab3a
Dec  3 16:07:57 np0005544708 sharp_bouman[74511]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec  3 16:07:57 np0005544708 systemd[1]: libpod-10eeea3b0bff7a8e1f23b523b5335e035569e5debedc2aa5ec1d5a58c1f43e55.scope: Deactivated successfully.
Dec  3 16:07:57 np0005544708 podman[74495]: 2025-12-03 21:07:57.004736552 +0000 UTC m=+0.170942770 container died 10eeea3b0bff7a8e1f23b523b5335e035569e5debedc2aa5ec1d5a58c1f43e55 (image=quay.io/ceph/ceph:v20, name=sharp_bouman, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 16:07:57 np0005544708 podman[74495]: 2025-12-03 21:07:57.057649446 +0000 UTC m=+0.223855664 container remove 10eeea3b0bff7a8e1f23b523b5335e035569e5debedc2aa5ec1d5a58c1f43e55 (image=quay.io/ceph/ceph:v20, name=sharp_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:07:57 np0005544708 systemd[1]: libpod-conmon-10eeea3b0bff7a8e1f23b523b5335e035569e5debedc2aa5ec1d5a58c1f43e55.scope: Deactivated successfully.
Dec  3 16:07:57 np0005544708 podman[74532]: 2025-12-03 21:07:57.168131369 +0000 UTC m=+0.075303904 container create a3dbdf7d75406e2f404ab9f653a6b4d797b17d232374fedc67b0ed37273ff7d3 (image=quay.io/ceph/ceph:v20, name=goofy_varahamihira, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 16:07:57 np0005544708 systemd[1]: Started libpod-conmon-a3dbdf7d75406e2f404ab9f653a6b4d797b17d232374fedc67b0ed37273ff7d3.scope.
Dec  3 16:07:57 np0005544708 podman[74532]: 2025-12-03 21:07:57.145619966 +0000 UTC m=+0.052792521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:07:57 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:07:57 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e842c6671a986ceed0e9b82a6663daa8c7a15b2c0791079f82602e3541dd5134/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:07:57 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e842c6671a986ceed0e9b82a6663daa8c7a15b2c0791079f82602e3541dd5134/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  3 16:07:57 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e842c6671a986ceed0e9b82a6663daa8c7a15b2c0791079f82602e3541dd5134/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:07:57 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e842c6671a986ceed0e9b82a6663daa8c7a15b2c0791079f82602e3541dd5134/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 16:07:57 np0005544708 podman[74532]: 2025-12-03 21:07:57.265784878 +0000 UTC m=+0.172957463 container init a3dbdf7d75406e2f404ab9f653a6b4d797b17d232374fedc67b0ed37273ff7d3 (image=quay.io/ceph/ceph:v20, name=goofy_varahamihira, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 16:07:57 np0005544708 podman[74532]: 2025-12-03 21:07:57.281264262 +0000 UTC m=+0.188436797 container start a3dbdf7d75406e2f404ab9f653a6b4d797b17d232374fedc67b0ed37273ff7d3 (image=quay.io/ceph/ceph:v20, name=goofy_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec  3 16:07:57 np0005544708 podman[74532]: 2025-12-03 21:07:57.28641581 +0000 UTC m=+0.193588415 container attach a3dbdf7d75406e2f404ab9f653a6b4d797b17d232374fedc67b0ed37273ff7d3 (image=quay.io/ceph/ceph:v20, name=goofy_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:07:57 np0005544708 systemd[1]: libpod-a3dbdf7d75406e2f404ab9f653a6b4d797b17d232374fedc67b0ed37273ff7d3.scope: Deactivated successfully.
Dec  3 16:07:57 np0005544708 podman[74532]: 2025-12-03 21:07:57.388075676 +0000 UTC m=+0.295248271 container died a3dbdf7d75406e2f404ab9f653a6b4d797b17d232374fedc67b0ed37273ff7d3 (image=quay.io/ceph/ceph:v20, name=goofy_varahamihira, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:07:57 np0005544708 podman[74532]: 2025-12-03 21:07:57.435017641 +0000 UTC m=+0.342190176 container remove a3dbdf7d75406e2f404ab9f653a6b4d797b17d232374fedc67b0ed37273ff7d3 (image=quay.io/ceph/ceph:v20, name=goofy_varahamihira, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec  3 16:07:57 np0005544708 systemd[1]: libpod-conmon-a3dbdf7d75406e2f404ab9f653a6b4d797b17d232374fedc67b0ed37273ff7d3.scope: Deactivated successfully.
Dec  3 16:07:57 np0005544708 systemd[1]: Reloading.
Dec  3 16:07:57 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:07:57 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:07:57 np0005544708 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 16:07:57 np0005544708 systemd[1]: Reloading.
Dec  3 16:07:57 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:07:57 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:07:58 np0005544708 systemd[1]: Reached target All Ceph clusters and services.
Dec  3 16:07:58 np0005544708 systemd[1]: Reloading.
Dec  3 16:07:58 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:07:58 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:07:58 np0005544708 systemd[1]: Reached target Ceph cluster c21de27e-a7fd-594b-8324-0697ba9aab3a.
Dec  3 16:07:58 np0005544708 systemd[1]: Reloading.
Dec  3 16:07:58 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:07:58 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:07:58 np0005544708 systemd[1]: Reloading.
Dec  3 16:07:58 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:07:58 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:07:58 np0005544708 systemd[1]: Created slice Slice /system/ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a.
Dec  3 16:07:58 np0005544708 systemd[1]: Reached target System Time Set.
Dec  3 16:07:58 np0005544708 systemd[1]: Reached target System Time Synchronized.
Dec  3 16:07:58 np0005544708 systemd[1]: Starting Ceph mon.compute-0 for c21de27e-a7fd-594b-8324-0697ba9aab3a...
Dec  3 16:07:58 np0005544708 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 16:07:58 np0005544708 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 16:07:59 np0005544708 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 16:07:59 np0005544708 podman[74830]: 2025-12-03 21:07:59.156067815 +0000 UTC m=+0.060193450 container create 21a850393ffc3b228a0c2499d1dbfa6a53893dec4078b9c095d7b72c6199ac2e (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec  3 16:07:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7826f93da93a73ed4e24bc845da7f990ade20e4d48691fc3ded33160f103e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:07:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7826f93da93a73ed4e24bc845da7f990ade20e4d48691fc3ded33160f103e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:07:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7826f93da93a73ed4e24bc845da7f990ade20e4d48691fc3ded33160f103e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:07:59 np0005544708 podman[74830]: 2025-12-03 21:07:59.126192166 +0000 UTC m=+0.030317871 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:07:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7826f93da93a73ed4e24bc845da7f990ade20e4d48691fc3ded33160f103e6/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 16:07:59 np0005544708 podman[74830]: 2025-12-03 21:07:59.234982053 +0000 UTC m=+0.139107748 container init 21a850393ffc3b228a0c2499d1dbfa6a53893dec4078b9c095d7b72c6199ac2e (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:07:59 np0005544708 podman[74830]: 2025-12-03 21:07:59.246979095 +0000 UTC m=+0.151104720 container start 21a850393ffc3b228a0c2499d1dbfa6a53893dec4078b9c095d7b72c6199ac2e (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  3 16:07:59 np0005544708 bash[74830]: 21a850393ffc3b228a0c2499d1dbfa6a53893dec4078b9c095d7b72c6199ac2e
Dec  3 16:07:59 np0005544708 systemd[1]: Started Ceph mon.compute-0 for c21de27e-a7fd-594b-8324-0697ba9aab3a.
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: pidfile_write: ignore empty --pid-file
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: load: jerasure load: lrc 
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: RocksDB version: 7.9.2
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Git sha 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Compile date 2025-10-30 15:42:43
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: DB SUMMARY
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: DB Session ID:  7J40V8I7ABGAZ2BL57XQ
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: CURRENT file:  CURRENT
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                         Options.error_if_exists: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                       Options.create_if_missing: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                                     Options.env: 0x55a3e0832440
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                                Options.info_log: 0x55a3e18613e0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                              Options.statistics: (nil)
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                               Options.use_fsync: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                              Options.db_log_dir: 
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                                 Options.wal_dir: 
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                    Options.write_buffer_manager: 0x55a3e17e0140
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                  Options.unordered_write: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                               Options.row_cache: None
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                              Options.wal_filter: None
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.two_write_queues: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.wal_compression: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.atomic_flush: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.max_background_jobs: 2
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.max_background_compactions: -1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.max_subcompactions: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.max_total_wal_size: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                          Options.max_open_files: -1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:       Options.compaction_readahead_size: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Compression algorithms supported:
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: 	kZSTD supported: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: 	kXpressCompression supported: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: 	kBZip2Compression supported: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: 	kLZ4Compression supported: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: 	kZlibCompression supported: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: 	kSnappyCompression supported: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:           Options.merge_operator: 
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:        Options.compaction_filter: None
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a3e17ec600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a3e17d18d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:        Options.write_buffer_size: 33554432
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:  Options.max_write_buffer_number: 2
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:          Options.compression: NoCompression
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.num_levels: 7
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d83d641b-0db7-44b5-9540-349f4c36f664
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796079329404, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796079332085, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796079, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "7J40V8I7ABGAZ2BL57XQ", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796079332369, "job": 1, "event": "recovery_finished"}
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a3e17fee00
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: DB pointer 0x55a3e194a000
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a3e17d18d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)#012#012** File Read Latency Histogram By Level [default] **
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid c21de27e-a7fd-594b-8324-0697ba9aab3a
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@-1(???) e0 preinit fsid c21de27e-a7fd-594b-8324-0697ba9aab3a
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(probing) e0 win_standalone_election
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec  3 16:07:59 np0005544708 podman[74851]: 2025-12-03 21:07:59.348741884 +0000 UTC m=+0.059352447 container create 443a0578fc77013b272b08bbc8d47f78ace0a0e0c715b75560c7398f20f73ae2 (image=quay.io/ceph/ceph:v20, name=affectionate_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: log_channel(cluster) log [DBG] : fsid c21de27e-a7fd-594b-8324-0697ba9aab3a
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: log_channel(cluster) log [DBG] : last_changed 2025-12-03T21:07:57.000116+0000
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: log_channel(cluster) log [DBG] : created 2025-12-03T21:07:57.000116+0000
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2025-12-03T21:07:57.329036Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,os=Linux}
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).mds e1 new map
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2025-12-03T21:07:59:373870+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: log_channel(cluster) log [DBG] : fsmap 
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mkfs c21de27e-a7fd-594b-8324-0697ba9aab3a
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  3 16:07:59 np0005544708 systemd[1]: Started libpod-conmon-443a0578fc77013b272b08bbc8d47f78ace0a0e0c715b75560c7398f20f73ae2.scope.
Dec  3 16:07:59 np0005544708 podman[74851]: 2025-12-03 21:07:59.319677127 +0000 UTC m=+0.030287710 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:07:59 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:07:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07e0ed30aed43b3ff86a38927958aa56606b7e5df55422323d3738356e3ed85/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:07:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07e0ed30aed43b3ff86a38927958aa56606b7e5df55422323d3738356e3ed85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:07:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07e0ed30aed43b3ff86a38927958aa56606b7e5df55422323d3738356e3ed85/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 16:07:59 np0005544708 podman[74851]: 2025-12-03 21:07:59.474895035 +0000 UTC m=+0.185505608 container init 443a0578fc77013b272b08bbc8d47f78ace0a0e0c715b75560c7398f20f73ae2 (image=quay.io/ceph/ceph:v20, name=affectionate_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  3 16:07:59 np0005544708 podman[74851]: 2025-12-03 21:07:59.486032923 +0000 UTC m=+0.196643456 container start 443a0578fc77013b272b08bbc8d47f78ace0a0e0c715b75560c7398f20f73ae2 (image=quay.io/ceph/ceph:v20, name=affectionate_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:07:59 np0005544708 podman[74851]: 2025-12-03 21:07:59.489506395 +0000 UTC m=+0.200116928 container attach 443a0578fc77013b272b08bbc8d47f78ace0a0e0c715b75560c7398f20f73ae2 (image=quay.io/ceph/ceph:v20, name=affectionate_goldberg, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec  3 16:07:59 np0005544708 ceph-mon[74850]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1609393615' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec  3 16:07:59 np0005544708 affectionate_goldberg[74905]:  cluster:
Dec  3 16:07:59 np0005544708 affectionate_goldberg[74905]:    id:     c21de27e-a7fd-594b-8324-0697ba9aab3a
Dec  3 16:07:59 np0005544708 affectionate_goldberg[74905]:    health: HEALTH_OK
Dec  3 16:07:59 np0005544708 affectionate_goldberg[74905]: 
Dec  3 16:07:59 np0005544708 affectionate_goldberg[74905]:  services:
Dec  3 16:07:59 np0005544708 affectionate_goldberg[74905]:    mon: 1 daemons, quorum compute-0 (age 0.319761s) [leader: compute-0]
Dec  3 16:07:59 np0005544708 affectionate_goldberg[74905]:    mgr: no daemons active
Dec  3 16:07:59 np0005544708 affectionate_goldberg[74905]:    osd: 0 osds: 0 up, 0 in
Dec  3 16:07:59 np0005544708 affectionate_goldberg[74905]: 
Dec  3 16:07:59 np0005544708 affectionate_goldberg[74905]:  data:
Dec  3 16:07:59 np0005544708 affectionate_goldberg[74905]:    pools:   0 pools, 0 pgs
Dec  3 16:07:59 np0005544708 affectionate_goldberg[74905]:    objects: 0 objects, 0 B
Dec  3 16:07:59 np0005544708 affectionate_goldberg[74905]:    usage:   0 B used, 0 B / 0 B avail
Dec  3 16:07:59 np0005544708 affectionate_goldberg[74905]:    pgs:     
Dec  3 16:07:59 np0005544708 affectionate_goldberg[74905]: 
Dec  3 16:07:59 np0005544708 systemd[1]: libpod-443a0578fc77013b272b08bbc8d47f78ace0a0e0c715b75560c7398f20f73ae2.scope: Deactivated successfully.
Dec  3 16:07:59 np0005544708 podman[74851]: 2025-12-03 21:07:59.707510722 +0000 UTC m=+0.418121345 container died 443a0578fc77013b272b08bbc8d47f78ace0a0e0c715b75560c7398f20f73ae2 (image=quay.io/ceph/ceph:v20, name=affectionate_goldberg, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:07:59 np0005544708 podman[74851]: 2025-12-03 21:07:59.763801466 +0000 UTC m=+0.474412019 container remove 443a0578fc77013b272b08bbc8d47f78ace0a0e0c715b75560c7398f20f73ae2 (image=quay.io/ceph/ceph:v20, name=affectionate_goldberg, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:07:59 np0005544708 systemd[1]: libpod-conmon-443a0578fc77013b272b08bbc8d47f78ace0a0e0c715b75560c7398f20f73ae2.scope: Deactivated successfully.
Dec  3 16:07:59 np0005544708 podman[74945]: 2025-12-03 21:07:59.843710362 +0000 UTC m=+0.055920926 container create cff5dfb52fb6f6e4ae48eb94fe0f9ea460e58eb524c4126b7780c52e336fd396 (image=quay.io/ceph/ceph:v20, name=jovial_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 16:07:59 np0005544708 systemd[1]: Started libpod-conmon-cff5dfb52fb6f6e4ae48eb94fe0f9ea460e58eb524c4126b7780c52e336fd396.scope.
Dec  3 16:07:59 np0005544708 podman[74945]: 2025-12-03 21:07:59.814012388 +0000 UTC m=+0.026222992 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:07:59 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:07:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8bcaaac490cf82902f677024e14ca4e5bad549716722ccae9aaae4130eed26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:07:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8bcaaac490cf82902f677024e14ca4e5bad549716722ccae9aaae4130eed26/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:07:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8bcaaac490cf82902f677024e14ca4e5bad549716722ccae9aaae4130eed26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:07:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8bcaaac490cf82902f677024e14ca4e5bad549716722ccae9aaae4130eed26/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 16:07:59 np0005544708 podman[74945]: 2025-12-03 21:07:59.93381528 +0000 UTC m=+0.146025884 container init cff5dfb52fb6f6e4ae48eb94fe0f9ea460e58eb524c4126b7780c52e336fd396 (image=quay.io/ceph/ceph:v20, name=jovial_cray, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:07:59 np0005544708 podman[74945]: 2025-12-03 21:07:59.941720801 +0000 UTC m=+0.153931345 container start cff5dfb52fb6f6e4ae48eb94fe0f9ea460e58eb524c4126b7780c52e336fd396 (image=quay.io/ceph/ceph:v20, name=jovial_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec  3 16:07:59 np0005544708 podman[74945]: 2025-12-03 21:07:59.945945203 +0000 UTC m=+0.158155817 container attach cff5dfb52fb6f6e4ae48eb94fe0f9ea460e58eb524c4126b7780c52e336fd396 (image=quay.io/ceph/ceph:v20, name=jovial_cray, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:00 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  3 16:08:00 np0005544708 ceph-mon[74850]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1150063760' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec  3 16:08:00 np0005544708 ceph-mon[74850]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1150063760' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  3 16:08:00 np0005544708 jovial_cray[74962]: 
Dec  3 16:08:00 np0005544708 jovial_cray[74962]: [global]
Dec  3 16:08:00 np0005544708 jovial_cray[74962]: #011fsid = c21de27e-a7fd-594b-8324-0697ba9aab3a
Dec  3 16:08:00 np0005544708 jovial_cray[74962]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec  3 16:08:00 np0005544708 jovial_cray[74962]: #011osd_crush_chooseleaf_type = 0
Dec  3 16:08:00 np0005544708 systemd[1]: libpod-cff5dfb52fb6f6e4ae48eb94fe0f9ea460e58eb524c4126b7780c52e336fd396.scope: Deactivated successfully.
Dec  3 16:08:00 np0005544708 podman[74945]: 2025-12-03 21:08:00.183881513 +0000 UTC m=+0.396092037 container died cff5dfb52fb6f6e4ae48eb94fe0f9ea460e58eb524c4126b7780c52e336fd396 (image=quay.io/ceph/ceph:v20, name=jovial_cray, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  3 16:08:00 np0005544708 systemd[1]: var-lib-containers-storage-overlay-5a8bcaaac490cf82902f677024e14ca4e5bad549716722ccae9aaae4130eed26-merged.mount: Deactivated successfully.
Dec  3 16:08:00 np0005544708 podman[74945]: 2025-12-03 21:08:00.231377609 +0000 UTC m=+0.443588133 container remove cff5dfb52fb6f6e4ae48eb94fe0f9ea460e58eb524c4126b7780c52e336fd396 (image=quay.io/ceph/ceph:v20, name=jovial_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 16:08:00 np0005544708 systemd[1]: libpod-conmon-cff5dfb52fb6f6e4ae48eb94fe0f9ea460e58eb524c4126b7780c52e336fd396.scope: Deactivated successfully.
Dec  3 16:08:00 np0005544708 podman[75000]: 2025-12-03 21:08:00.326671888 +0000 UTC m=+0.065684407 container create e1ac53277320eccead34320e1185abc52d95db8738d29e9a70590b0a79ade31a (image=quay.io/ceph/ceph:v20, name=peaceful_bassi, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 16:08:00 np0005544708 systemd[1]: Started libpod-conmon-e1ac53277320eccead34320e1185abc52d95db8738d29e9a70590b0a79ade31a.scope.
Dec  3 16:08:00 np0005544708 podman[75000]: 2025-12-03 21:08:00.298856409 +0000 UTC m=+0.037868998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:00 np0005544708 ceph-mon[74850]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  3 16:08:00 np0005544708 ceph-mon[74850]: from='client.? 192.168.122.100:0/1150063760' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec  3 16:08:00 np0005544708 ceph-mon[74850]: from='client.? 192.168.122.100:0/1150063760' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  3 16:08:00 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:00 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894bdc412a29c6427fbcfac0856f1df5e1e8e096d47f27d60b2c0d6ef8766c44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:00 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894bdc412a29c6427fbcfac0856f1df5e1e8e096d47f27d60b2c0d6ef8766c44/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:00 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894bdc412a29c6427fbcfac0856f1df5e1e8e096d47f27d60b2c0d6ef8766c44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:00 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894bdc412a29c6427fbcfac0856f1df5e1e8e096d47f27d60b2c0d6ef8766c44/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:00 np0005544708 podman[75000]: 2025-12-03 21:08:00.447175161 +0000 UTC m=+0.186187680 container init e1ac53277320eccead34320e1185abc52d95db8738d29e9a70590b0a79ade31a (image=quay.io/ceph/ceph:v20, name=peaceful_bassi, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  3 16:08:00 np0005544708 podman[75000]: 2025-12-03 21:08:00.454310758 +0000 UTC m=+0.193323247 container start e1ac53277320eccead34320e1185abc52d95db8738d29e9a70590b0a79ade31a (image=quay.io/ceph/ceph:v20, name=peaceful_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 16:08:00 np0005544708 podman[75000]: 2025-12-03 21:08:00.458488551 +0000 UTC m=+0.197501080 container attach e1ac53277320eccead34320e1185abc52d95db8738d29e9a70590b0a79ade31a (image=quay.io/ceph/ceph:v20, name=peaceful_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  3 16:08:00 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:08:00 np0005544708 ceph-mon[74850]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3461721600' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:08:00 np0005544708 systemd[1]: libpod-e1ac53277320eccead34320e1185abc52d95db8738d29e9a70590b0a79ade31a.scope: Deactivated successfully.
Dec  3 16:08:00 np0005544708 podman[75000]: 2025-12-03 21:08:00.705133647 +0000 UTC m=+0.444146156 container died e1ac53277320eccead34320e1185abc52d95db8738d29e9a70590b0a79ade31a (image=quay.io/ceph/ceph:v20, name=peaceful_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec  3 16:08:00 np0005544708 systemd[1]: var-lib-containers-storage-overlay-894bdc412a29c6427fbcfac0856f1df5e1e8e096d47f27d60b2c0d6ef8766c44-merged.mount: Deactivated successfully.
Dec  3 16:08:00 np0005544708 podman[75000]: 2025-12-03 21:08:00.761258556 +0000 UTC m=+0.500271075 container remove e1ac53277320eccead34320e1185abc52d95db8738d29e9a70590b0a79ade31a (image=quay.io/ceph/ceph:v20, name=peaceful_bassi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  3 16:08:00 np0005544708 systemd[1]: libpod-conmon-e1ac53277320eccead34320e1185abc52d95db8738d29e9a70590b0a79ade31a.scope: Deactivated successfully.
Dec  3 16:08:00 np0005544708 systemd[1]: Stopping Ceph mon.compute-0 for c21de27e-a7fd-594b-8324-0697ba9aab3a...
Dec  3 16:08:01 np0005544708 ceph-mon[74850]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  3 16:08:01 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  3 16:08:01 np0005544708 ceph-mon[74850]: mon.compute-0@0(leader) e1 shutdown
Dec  3 16:08:01 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0[74846]: 2025-12-03T21:08:01.046+0000 7f3e499eb640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  3 16:08:01 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0[74846]: 2025-12-03T21:08:01.046+0000 7f3e499eb640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  3 16:08:01 np0005544708 ceph-mon[74850]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  3 16:08:01 np0005544708 ceph-mon[74850]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  3 16:08:01 np0005544708 podman[75082]: 2025-12-03 21:08:01.077086565 +0000 UTC m=+0.083223481 container died 21a850393ffc3b228a0c2499d1dbfa6a53893dec4078b9c095d7b72c6199ac2e (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:08:01 np0005544708 systemd[1]: var-lib-containers-storage-overlay-ab7826f93da93a73ed4e24bc845da7f990ade20e4d48691fc3ded33160f103e6-merged.mount: Deactivated successfully.
Dec  3 16:08:01 np0005544708 podman[75082]: 2025-12-03 21:08:01.113866485 +0000 UTC m=+0.120003431 container remove 21a850393ffc3b228a0c2499d1dbfa6a53893dec4078b9c095d7b72c6199ac2e (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:01 np0005544708 bash[75082]: ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0
Dec  3 16:08:01 np0005544708 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 16:08:01 np0005544708 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  3 16:08:01 np0005544708 systemd[1]: ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a@mon.compute-0.service: Deactivated successfully.
Dec  3 16:08:01 np0005544708 systemd[1]: Stopped Ceph mon.compute-0 for c21de27e-a7fd-594b-8324-0697ba9aab3a.
Dec  3 16:08:01 np0005544708 systemd[1]: ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a@mon.compute-0.service: Consumed 1.127s CPU time.
Dec  3 16:08:01 np0005544708 systemd[1]: Starting Ceph mon.compute-0 for c21de27e-a7fd-594b-8324-0697ba9aab3a...
Dec  3 16:08:01 np0005544708 podman[75184]: 2025-12-03 21:08:01.540826085 +0000 UTC m=+0.054105040 container create 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:01 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e130db53cb4ce5b03dc6db20b7b7e14193120feb03a76fb8dd45110fe5503780/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:01 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e130db53cb4ce5b03dc6db20b7b7e14193120feb03a76fb8dd45110fe5503780/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:01 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e130db53cb4ce5b03dc6db20b7b7e14193120feb03a76fb8dd45110fe5503780/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:01 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e130db53cb4ce5b03dc6db20b7b7e14193120feb03a76fb8dd45110fe5503780/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:01 np0005544708 podman[75184]: 2025-12-03 21:08:01.509561391 +0000 UTC m=+0.022840326 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:01 np0005544708 podman[75184]: 2025-12-03 21:08:01.615530854 +0000 UTC m=+0.128809819 container init 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec  3 16:08:01 np0005544708 podman[75184]: 2025-12-03 21:08:01.633655204 +0000 UTC m=+0.146934129 container start 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec  3 16:08:01 np0005544708 bash[75184]: 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916
Dec  3 16:08:01 np0005544708 systemd[1]: Started Ceph mon.compute-0 for c21de27e-a7fd-594b-8324-0697ba9aab3a.
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: pidfile_write: ignore empty --pid-file
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: load: jerasure load: lrc 
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: RocksDB version: 7.9.2
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Git sha 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Compile date 2025-10-30 15:42:43
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: DB SUMMARY
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: DB Session ID:  YRQHTOJ9E78VAMDNI6U1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: CURRENT file:  CURRENT
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                         Options.error_if_exists: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                       Options.create_if_missing: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                                     Options.env: 0x56170b774440
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                                Options.info_log: 0x56170d6a7e80
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                              Options.statistics: (nil)
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                               Options.use_fsync: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                              Options.db_log_dir: 
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                                 Options.wal_dir: 
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                    Options.write_buffer_manager: 0x56170d6f2140
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                  Options.unordered_write: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                               Options.row_cache: None
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                              Options.wal_filter: None
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.two_write_queues: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.wal_compression: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.atomic_flush: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.max_background_jobs: 2
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.max_background_compactions: -1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.max_subcompactions: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.max_total_wal_size: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                          Options.max_open_files: -1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:       Options.compaction_readahead_size: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Compression algorithms supported:
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: #011kZSTD supported: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: #011kXpressCompression supported: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: #011kBZip2Compression supported: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: #011kLZ4Compression supported: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: #011kZlibCompression supported: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: #011kSnappyCompression supported: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:           Options.merge_operator: 
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:        Options.compaction_filter: None
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56170d6fea00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56170d6e38d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:        Options.write_buffer_size: 33554432
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:  Options.max_write_buffer_number: 2
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:          Options.compression: NoCompression
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.num_levels: 7
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d83d641b-0db7-44b5-9540-349f4c36f664
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796081688909, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796081694786, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796081, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796081694915, "job": 1, "event": "recovery_finished"}
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56170d710e00
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: DB pointer 0x56170d85a000
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.4      0.01              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.4      0.01              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.4      0.01              0.00         1    0.005       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.01              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 3.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 3.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56170d6e38d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid c21de27e-a7fd-594b-8324-0697ba9aab3a
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: mon.compute-0@-1(???) e1 preinit fsid c21de27e-a7fd-594b-8324-0697ba9aab3a
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: mon.compute-0@-1(???).mds e1 new map
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: mon.compute-0@-1(???).mds e1 print_map
e1
btime 2025-12-03T21:07:59:373870+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : fsid c21de27e-a7fd-594b-8324-0697ba9aab3a
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : last_changed 2025-12-03T21:07:57.000116+0000
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : created 2025-12-03T21:07:57.000116+0000
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : fsmap 
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  3 16:08:01 np0005544708 podman[75205]: 2025-12-03 21:08:01.748780153 +0000 UTC m=+0.068399014 container create aacd22b91a099592e8cc13926950d97b036e870949e234f6daa4018351940b88 (image=quay.io/ceph/ceph:v20, name=infallible_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:01 np0005544708 ceph-mon[75204]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  3 16:08:01 np0005544708 systemd[1]: Started libpod-conmon-aacd22b91a099592e8cc13926950d97b036e870949e234f6daa4018351940b88.scope.
Dec  3 16:08:01 np0005544708 podman[75205]: 2025-12-03 21:08:01.723810995 +0000 UTC m=+0.043429936 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:01 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:01 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583dae49ac19e390d41d20e6091bd1ca13dcaaae1a0443052ea6247447b623c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:01 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583dae49ac19e390d41d20e6091bd1ca13dcaaae1a0443052ea6247447b623c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:01 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583dae49ac19e390d41d20e6091bd1ca13dcaaae1a0443052ea6247447b623c6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:01 np0005544708 podman[75205]: 2025-12-03 21:08:01.865795389 +0000 UTC m=+0.185414340 container init aacd22b91a099592e8cc13926950d97b036e870949e234f6daa4018351940b88 (image=quay.io/ceph/ceph:v20, name=infallible_khayyam, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 16:08:01 np0005544708 podman[75205]: 2025-12-03 21:08:01.878122875 +0000 UTC m=+0.197741766 container start aacd22b91a099592e8cc13926950d97b036e870949e234f6daa4018351940b88 (image=quay.io/ceph/ceph:v20, name=infallible_khayyam, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:01 np0005544708 podman[75205]: 2025-12-03 21:08:01.882823122 +0000 UTC m=+0.202442013 container attach aacd22b91a099592e8cc13926950d97b036e870949e234f6daa4018351940b88 (image=quay.io/ceph/ceph:v20, name=infallible_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  3 16:08:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec  3 16:08:02 np0005544708 systemd[1]: libpod-aacd22b91a099592e8cc13926950d97b036e870949e234f6daa4018351940b88.scope: Deactivated successfully.
Dec  3 16:08:02 np0005544708 podman[75287]: 2025-12-03 21:08:02.218922031 +0000 UTC m=+0.040149214 container died aacd22b91a099592e8cc13926950d97b036e870949e234f6daa4018351940b88 (image=quay.io/ceph/ceph:v20, name=infallible_khayyam, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:02 np0005544708 systemd[1]: var-lib-containers-storage-overlay-583dae49ac19e390d41d20e6091bd1ca13dcaaae1a0443052ea6247447b623c6-merged.mount: Deactivated successfully.
Dec  3 16:08:02 np0005544708 podman[75287]: 2025-12-03 21:08:02.268648383 +0000 UTC m=+0.089875516 container remove aacd22b91a099592e8cc13926950d97b036e870949e234f6daa4018351940b88 (image=quay.io/ceph/ceph:v20, name=infallible_khayyam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec  3 16:08:02 np0005544708 systemd[1]: libpod-conmon-aacd22b91a099592e8cc13926950d97b036e870949e234f6daa4018351940b88.scope: Deactivated successfully.
Dec  3 16:08:02 np0005544708 podman[75302]: 2025-12-03 21:08:02.38568804 +0000 UTC m=+0.072937606 container create 8bdbfc7f531e0d29fe5fb445987d6cd3ba91162bbd0acc9b5b6a0cad4fa76232 (image=quay.io/ceph/ceph:v20, name=blissful_austin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 16:08:02 np0005544708 systemd[1]: Started libpod-conmon-8bdbfc7f531e0d29fe5fb445987d6cd3ba91162bbd0acc9b5b6a0cad4fa76232.scope.
Dec  3 16:08:02 np0005544708 podman[75302]: 2025-12-03 21:08:02.353297098 +0000 UTC m=+0.040546714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:02 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:02 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6192df3ca52717e3c7f28e5e3492884e9412372b30d119b45894d8285d9772/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:02 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6192df3ca52717e3c7f28e5e3492884e9412372b30d119b45894d8285d9772/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:02 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6192df3ca52717e3c7f28e5e3492884e9412372b30d119b45894d8285d9772/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:02 np0005544708 podman[75302]: 2025-12-03 21:08:02.481282356 +0000 UTC m=+0.168531982 container init 8bdbfc7f531e0d29fe5fb445987d6cd3ba91162bbd0acc9b5b6a0cad4fa76232 (image=quay.io/ceph/ceph:v20, name=blissful_austin, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  3 16:08:02 np0005544708 podman[75302]: 2025-12-03 21:08:02.491193772 +0000 UTC m=+0.178443338 container start 8bdbfc7f531e0d29fe5fb445987d6cd3ba91162bbd0acc9b5b6a0cad4fa76232 (image=quay.io/ceph/ceph:v20, name=blissful_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 16:08:02 np0005544708 podman[75302]: 2025-12-03 21:08:02.496488003 +0000 UTC m=+0.183737569 container attach 8bdbfc7f531e0d29fe5fb445987d6cd3ba91162bbd0acc9b5b6a0cad4fa76232 (image=quay.io/ceph/ceph:v20, name=blissful_austin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 16:08:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Dec  3 16:08:02 np0005544708 systemd[1]: libpod-8bdbfc7f531e0d29fe5fb445987d6cd3ba91162bbd0acc9b5b6a0cad4fa76232.scope: Deactivated successfully.
Dec  3 16:08:02 np0005544708 podman[75302]: 2025-12-03 21:08:02.750246065 +0000 UTC m=+0.437495601 container died 8bdbfc7f531e0d29fe5fb445987d6cd3ba91162bbd0acc9b5b6a0cad4fa76232 (image=quay.io/ceph/ceph:v20, name=blissful_austin, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:02 np0005544708 systemd[1]: var-lib-containers-storage-overlay-db6192df3ca52717e3c7f28e5e3492884e9412372b30d119b45894d8285d9772-merged.mount: Deactivated successfully.
Dec  3 16:08:02 np0005544708 podman[75302]: 2025-12-03 21:08:02.790483111 +0000 UTC m=+0.477732657 container remove 8bdbfc7f531e0d29fe5fb445987d6cd3ba91162bbd0acc9b5b6a0cad4fa76232 (image=quay.io/ceph/ceph:v20, name=blissful_austin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:02 np0005544708 systemd[1]: libpod-conmon-8bdbfc7f531e0d29fe5fb445987d6cd3ba91162bbd0acc9b5b6a0cad4fa76232.scope: Deactivated successfully.
Dec  3 16:08:02 np0005544708 systemd[1]: Reloading.
Dec  3 16:08:02 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:08:02 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:08:03 np0005544708 systemd[1]: Reloading.
Dec  3 16:08:03 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:08:03 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:08:03 np0005544708 systemd[1]: Starting Ceph mgr.compute-0.jxauqt for c21de27e-a7fd-594b-8324-0697ba9aab3a...
Dec  3 16:08:03 np0005544708 podman[75481]: 2025-12-03 21:08:03.655194037 +0000 UTC m=+0.058078139 container create 3ad5fa1a42ade6349b66920b152c5e56f1754f06ff5829bb4056fc775998f6a5 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jxauqt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec  3 16:08:03 np0005544708 podman[75481]: 2025-12-03 21:08:03.627622525 +0000 UTC m=+0.030506717 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6ad6dc267861ffde8868c0189956fb90082124991312250387d49919eba603a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6ad6dc267861ffde8868c0189956fb90082124991312250387d49919eba603a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6ad6dc267861ffde8868c0189956fb90082124991312250387d49919eba603a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6ad6dc267861ffde8868c0189956fb90082124991312250387d49919eba603a/merged/var/lib/ceph/mgr/ceph-compute-0.jxauqt supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:03 np0005544708 podman[75481]: 2025-12-03 21:08:03.74131914 +0000 UTC m=+0.144203312 container init 3ad5fa1a42ade6349b66920b152c5e56f1754f06ff5829bb4056fc775998f6a5 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jxauqt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:03 np0005544708 podman[75481]: 2025-12-03 21:08:03.750474996 +0000 UTC m=+0.153359138 container start 3ad5fa1a42ade6349b66920b152c5e56f1754f06ff5829bb4056fc775998f6a5 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jxauqt, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:03 np0005544708 bash[75481]: 3ad5fa1a42ade6349b66920b152c5e56f1754f06ff5829bb4056fc775998f6a5
Dec  3 16:08:03 np0005544708 systemd[1]: Started Ceph mgr.compute-0.jxauqt for c21de27e-a7fd-594b-8324-0697ba9aab3a.
Dec  3 16:08:03 np0005544708 ceph-mgr[75500]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 16:08:03 np0005544708 ceph-mgr[75500]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec  3 16:08:03 np0005544708 ceph-mgr[75500]: pidfile_write: ignore empty --pid-file
Dec  3 16:08:03 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'alerts'
Dec  3 16:08:03 np0005544708 podman[75501]: 2025-12-03 21:08:03.890065992 +0000 UTC m=+0.085410385 container create b00c9920f25eca3171efe402eaf78e998e56a4de574b1ff919706af7f893e4bd (image=quay.io/ceph/ceph:v20, name=frosty_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 16:08:03 np0005544708 systemd[1]: Started libpod-conmon-b00c9920f25eca3171efe402eaf78e998e56a4de574b1ff919706af7f893e4bd.scope.
Dec  3 16:08:03 np0005544708 podman[75501]: 2025-12-03 21:08:03.862051408 +0000 UTC m=+0.057395881 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:03 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'balancer'
Dec  3 16:08:03 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca44ce071279ffe8e8f3003bb58f867241c249356a01a566add5026a2cf113c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca44ce071279ffe8e8f3003bb58f867241c249356a01a566add5026a2cf113c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca44ce071279ffe8e8f3003bb58f867241c249356a01a566add5026a2cf113c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:04 np0005544708 podman[75501]: 2025-12-03 21:08:04.007824657 +0000 UTC m=+0.203169130 container init b00c9920f25eca3171efe402eaf78e998e56a4de574b1ff919706af7f893e4bd (image=quay.io/ceph/ceph:v20, name=frosty_kirch, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:08:04 np0005544708 podman[75501]: 2025-12-03 21:08:04.019985098 +0000 UTC m=+0.215329521 container start b00c9920f25eca3171efe402eaf78e998e56a4de574b1ff919706af7f893e4bd (image=quay.io/ceph/ceph:v20, name=frosty_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  3 16:08:04 np0005544708 podman[75501]: 2025-12-03 21:08:04.024616233 +0000 UTC m=+0.219960656 container attach b00c9920f25eca3171efe402eaf78e998e56a4de574b1ff919706af7f893e4bd (image=quay.io/ceph/ceph:v20, name=frosty_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:04 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'cephadm'
Dec  3 16:08:04 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  3 16:08:04 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/121509719' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]: 
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]: {
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    "fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    "health": {
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "status": "HEALTH_OK",
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "checks": {},
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "mutes": []
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    },
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    "election_epoch": 5,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    "quorum": [
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        0
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    ],
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    "quorum_names": [
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "compute-0"
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    ],
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    "quorum_age": 2,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    "monmap": {
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "epoch": 1,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "min_mon_release_name": "tentacle",
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "num_mons": 1
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    },
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    "osdmap": {
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "epoch": 1,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "num_osds": 0,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "num_up_osds": 0,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "osd_up_since": 0,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "num_in_osds": 0,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "osd_in_since": 0,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "num_remapped_pgs": 0
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    },
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    "pgmap": {
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "pgs_by_state": [],
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "num_pgs": 0,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "num_pools": 0,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "num_objects": 0,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "data_bytes": 0,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "bytes_used": 0,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "bytes_avail": 0,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "bytes_total": 0
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    },
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    "fsmap": {
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "epoch": 1,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "btime": "2025-12-03T21:07:59:373870+0000",
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "by_rank": [],
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "up:standby": 0
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    },
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    "mgrmap": {
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "available": false,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "num_standbys": 0,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "modules": [
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:            "iostat",
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:            "nfs"
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        ],
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "services": {}
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    },
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    "servicemap": {
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "epoch": 1,
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "modified": "2025-12-03T21:07:59.377140+0000",
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:        "services": {}
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    },
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]:    "progress_events": {}
Dec  3 16:08:04 np0005544708 frosty_kirch[75538]: }
Dec  3 16:08:04 np0005544708 systemd[1]: libpod-b00c9920f25eca3171efe402eaf78e998e56a4de574b1ff919706af7f893e4bd.scope: Deactivated successfully.
Dec  3 16:08:04 np0005544708 podman[75501]: 2025-12-03 21:08:04.255823457 +0000 UTC m=+0.451167880 container died b00c9920f25eca3171efe402eaf78e998e56a4de574b1ff919706af7f893e4bd (image=quay.io/ceph/ceph:v20, name=frosty_kirch, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:08:04 np0005544708 systemd[1]: var-lib-containers-storage-overlay-3ca44ce071279ffe8e8f3003bb58f867241c249356a01a566add5026a2cf113c-merged.mount: Deactivated successfully.
Dec  3 16:08:04 np0005544708 podman[75501]: 2025-12-03 21:08:04.302188514 +0000 UTC m=+0.497532917 container remove b00c9920f25eca3171efe402eaf78e998e56a4de574b1ff919706af7f893e4bd (image=quay.io/ceph/ceph:v20, name=frosty_kirch, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 16:08:04 np0005544708 systemd[1]: libpod-conmon-b00c9920f25eca3171efe402eaf78e998e56a4de574b1ff919706af7f893e4bd.scope: Deactivated successfully.
Dec  3 16:08:04 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'crash'
Dec  3 16:08:04 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'dashboard'
Dec  3 16:08:05 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'devicehealth'
Dec  3 16:08:05 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'diskprediction_local'
Dec  3 16:08:05 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jxauqt[75496]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  3 16:08:05 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jxauqt[75496]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  3 16:08:05 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jxauqt[75496]:  from numpy import show_config as show_numpy_config
Dec  3 16:08:05 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'influx'
Dec  3 16:08:05 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'insights'
Dec  3 16:08:05 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'iostat'
Dec  3 16:08:06 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'k8sevents'
Dec  3 16:08:06 np0005544708 podman[75587]: 2025-12-03 21:08:06.405706108 +0000 UTC m=+0.067839011 container create 9fbb584e7041a0523ae33c5b6f57cb02c3d6d0fff872a534c3e619fc543c394c (image=quay.io/ceph/ceph:v20, name=laughing_kapitsa, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  3 16:08:06 np0005544708 systemd[1]: Started libpod-conmon-9fbb584e7041a0523ae33c5b6f57cb02c3d6d0fff872a534c3e619fc543c394c.scope.
Dec  3 16:08:06 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'localpool'
Dec  3 16:08:06 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:06 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b277afcaa35763dbdc6bf713cdfbadaf880e7676ccf3f62fb5ac0874433d08ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:06 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b277afcaa35763dbdc6bf713cdfbadaf880e7676ccf3f62fb5ac0874433d08ff/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:06 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b277afcaa35763dbdc6bf713cdfbadaf880e7676ccf3f62fb5ac0874433d08ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:06 np0005544708 podman[75587]: 2025-12-03 21:08:06.382328449 +0000 UTC m=+0.044461392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:06 np0005544708 podman[75587]: 2025-12-03 21:08:06.489917242 +0000 UTC m=+0.152050155 container init 9fbb584e7041a0523ae33c5b6f57cb02c3d6d0fff872a534c3e619fc543c394c (image=quay.io/ceph/ceph:v20, name=laughing_kapitsa, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:08:06 np0005544708 podman[75587]: 2025-12-03 21:08:06.497277935 +0000 UTC m=+0.159410868 container start 9fbb584e7041a0523ae33c5b6f57cb02c3d6d0fff872a534c3e619fc543c394c (image=quay.io/ceph/ceph:v20, name=laughing_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  3 16:08:06 np0005544708 podman[75587]: 2025-12-03 21:08:06.501503349 +0000 UTC m=+0.163636262 container attach 9fbb584e7041a0523ae33c5b6f57cb02c3d6d0fff872a534c3e619fc543c394c (image=quay.io/ceph/ceph:v20, name=laughing_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  3 16:08:06 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'mds_autoscaler'
Dec  3 16:08:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  3 16:08:06 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2727857518' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]: 
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]: {
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    "fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    "health": {
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "status": "HEALTH_OK",
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "checks": {},
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "mutes": []
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    },
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    "election_epoch": 5,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    "quorum": [
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        0
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    ],
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    "quorum_names": [
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "compute-0"
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    ],
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    "quorum_age": 4,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    "monmap": {
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "epoch": 1,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "min_mon_release_name": "tentacle",
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "num_mons": 1
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    },
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    "osdmap": {
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "epoch": 1,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "num_osds": 0,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "num_up_osds": 0,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "osd_up_since": 0,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "num_in_osds": 0,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "osd_in_since": 0,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "num_remapped_pgs": 0
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    },
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    "pgmap": {
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "pgs_by_state": [],
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "num_pgs": 0,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "num_pools": 0,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "num_objects": 0,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "data_bytes": 0,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "bytes_used": 0,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "bytes_avail": 0,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "bytes_total": 0
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    },
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    "fsmap": {
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "epoch": 1,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "btime": "2025-12-03T21:07:59:373870+0000",
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "by_rank": [],
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "up:standby": 0
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    },
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    "mgrmap": {
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "available": false,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "num_standbys": 0,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "modules": [
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:            "iostat",
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:            "nfs"
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        ],
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "services": {}
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    },
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    "servicemap": {
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "epoch": 1,
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "modified": "2025-12-03T21:07:59.377140+0000",
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:        "services": {}
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    },
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]:    "progress_events": {}
Dec  3 16:08:06 np0005544708 laughing_kapitsa[75603]: }
Dec  3 16:08:06 np0005544708 systemd[1]: libpod-9fbb584e7041a0523ae33c5b6f57cb02c3d6d0fff872a534c3e619fc543c394c.scope: Deactivated successfully.
Dec  3 16:08:06 np0005544708 podman[75587]: 2025-12-03 21:08:06.727379381 +0000 UTC m=+0.389512284 container died 9fbb584e7041a0523ae33c5b6f57cb02c3d6d0fff872a534c3e619fc543c394c (image=quay.io/ceph/ceph:v20, name=laughing_kapitsa, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  3 16:08:06 np0005544708 systemd[1]: var-lib-containers-storage-overlay-b277afcaa35763dbdc6bf713cdfbadaf880e7676ccf3f62fb5ac0874433d08ff-merged.mount: Deactivated successfully.
Dec  3 16:08:06 np0005544708 podman[75587]: 2025-12-03 21:08:06.776023215 +0000 UTC m=+0.438156108 container remove 9fbb584e7041a0523ae33c5b6f57cb02c3d6d0fff872a534c3e619fc543c394c (image=quay.io/ceph/ceph:v20, name=laughing_kapitsa, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec  3 16:08:06 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'mirroring'
Dec  3 16:08:06 np0005544708 systemd[1]: libpod-conmon-9fbb584e7041a0523ae33c5b6f57cb02c3d6d0fff872a534c3e619fc543c394c.scope: Deactivated successfully.
Dec  3 16:08:06 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'nfs'
Dec  3 16:08:07 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'orchestrator'
Dec  3 16:08:07 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'osd_perf_query'
Dec  3 16:08:07 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'osd_support'
Dec  3 16:08:07 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'pg_autoscaler'
Dec  3 16:08:07 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'progress'
Dec  3 16:08:07 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'prometheus'
Dec  3 16:08:07 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'rbd_support'
Dec  3 16:08:08 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'rgw'
Dec  3 16:08:08 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'rook'
Dec  3 16:08:08 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'selftest'
Dec  3 16:08:08 np0005544708 podman[75641]: 2025-12-03 21:08:08.882782049 +0000 UTC m=+0.073775187 container create d0ca0f7bed9c496f975e44266c534d42743ab5f396b8a936c712ed096c2cf0eb (image=quay.io/ceph/ceph:v20, name=naughty_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  3 16:08:08 np0005544708 systemd[1]: Started libpod-conmon-d0ca0f7bed9c496f975e44266c534d42743ab5f396b8a936c712ed096c2cf0eb.scope.
Dec  3 16:08:08 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'smb'
Dec  3 16:08:08 np0005544708 podman[75641]: 2025-12-03 21:08:08.850980892 +0000 UTC m=+0.041974100 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:08 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:08 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/691a5aafc96b94094f44d25792e17c888960584ed12150e4ada5255d079f3c69/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:08 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/691a5aafc96b94094f44d25792e17c888960584ed12150e4ada5255d079f3c69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:08 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/691a5aafc96b94094f44d25792e17c888960584ed12150e4ada5255d079f3c69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:08 np0005544708 podman[75641]: 2025-12-03 21:08:08.977217767 +0000 UTC m=+0.168210955 container init d0ca0f7bed9c496f975e44266c534d42743ab5f396b8a936c712ed096c2cf0eb (image=quay.io/ceph/ceph:v20, name=naughty_dijkstra, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  3 16:08:08 np0005544708 podman[75641]: 2025-12-03 21:08:08.984456406 +0000 UTC m=+0.175449544 container start d0ca0f7bed9c496f975e44266c534d42743ab5f396b8a936c712ed096c2cf0eb (image=quay.io/ceph/ceph:v20, name=naughty_dijkstra, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:08:08 np0005544708 podman[75641]: 2025-12-03 21:08:08.988909036 +0000 UTC m=+0.179902164 container attach d0ca0f7bed9c496f975e44266c534d42743ab5f396b8a936c712ed096c2cf0eb (image=quay.io/ceph/ceph:v20, name=naughty_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  3 16:08:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  3 16:08:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/829808550' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]: 
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]: {
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    "fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    "health": {
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "status": "HEALTH_OK",
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "checks": {},
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "mutes": []
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    },
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    "election_epoch": 5,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    "quorum": [
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        0
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    ],
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    "quorum_names": [
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "compute-0"
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    ],
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    "quorum_age": 7,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    "monmap": {
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "epoch": 1,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "min_mon_release_name": "tentacle",
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "num_mons": 1
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    },
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    "osdmap": {
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "epoch": 1,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "num_osds": 0,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "num_up_osds": 0,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "osd_up_since": 0,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "num_in_osds": 0,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "osd_in_since": 0,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "num_remapped_pgs": 0
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    },
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    "pgmap": {
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "pgs_by_state": [],
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "num_pgs": 0,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "num_pools": 0,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "num_objects": 0,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "data_bytes": 0,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "bytes_used": 0,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "bytes_avail": 0,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "bytes_total": 0
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    },
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    "fsmap": {
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "epoch": 1,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "btime": "2025-12-03T21:07:59:373870+0000",
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "by_rank": [],
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "up:standby": 0
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    },
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    "mgrmap": {
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "available": false,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "num_standbys": 0,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "modules": [
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:            "iostat",
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:            "nfs"
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        ],
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "services": {}
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    },
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    "servicemap": {
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "epoch": 1,
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "modified": "2025-12-03T21:07:59.377140+0000",
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:        "services": {}
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    },
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]:    "progress_events": {}
Dec  3 16:08:09 np0005544708 naughty_dijkstra[75658]: }
Dec  3 16:08:09 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'snap_schedule'
Dec  3 16:08:09 np0005544708 systemd[1]: libpod-d0ca0f7bed9c496f975e44266c534d42743ab5f396b8a936c712ed096c2cf0eb.scope: Deactivated successfully.
Dec  3 16:08:09 np0005544708 podman[75641]: 2025-12-03 21:08:09.217317371 +0000 UTC m=+0.408310509 container died d0ca0f7bed9c496f975e44266c534d42743ab5f396b8a936c712ed096c2cf0eb (image=quay.io/ceph/ceph:v20, name=naughty_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:08:09 np0005544708 systemd[1]: var-lib-containers-storage-overlay-691a5aafc96b94094f44d25792e17c888960584ed12150e4ada5255d079f3c69-merged.mount: Deactivated successfully.
Dec  3 16:08:09 np0005544708 podman[75641]: 2025-12-03 21:08:09.273483991 +0000 UTC m=+0.464477129 container remove d0ca0f7bed9c496f975e44266c534d42743ab5f396b8a936c712ed096c2cf0eb (image=quay.io/ceph/ceph:v20, name=naughty_dijkstra, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:08:09 np0005544708 systemd[1]: libpod-conmon-d0ca0f7bed9c496f975e44266c534d42743ab5f396b8a936c712ed096c2cf0eb.scope: Deactivated successfully.
Dec  3 16:08:09 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'stats'
Dec  3 16:08:09 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'status'
Dec  3 16:08:09 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'telegraf'
Dec  3 16:08:09 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'telemetry'
Dec  3 16:08:09 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'test_orchestrator'
Dec  3 16:08:09 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'volumes'
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: ms_deliver_dispatch: unhandled message 0x56347b571860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.jxauqt
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: mgr handle_mgr_map Activating!
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: mgr handle_mgr_map I am now activating
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.jxauqt(active, starting, since 0.012688s)
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/355953899' entity='mgr.compute-0.jxauqt' cmd={"prefix": "mds metadata"} : dispatch
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).mds e1 all = 1
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/355953899' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata"} : dispatch
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/355953899' entity='mgr.compute-0.jxauqt' cmd={"prefix": "mon metadata"} : dispatch
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/355953899' entity='mgr.compute-0.jxauqt' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.jxauqt", "id": "compute-0.jxauqt"} v 0)
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/355953899' entity='mgr.compute-0.jxauqt' cmd={"prefix": "mgr metadata", "who": "compute-0.jxauqt", "id": "compute-0.jxauqt"} : dispatch
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: balancer
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [balancer INFO root] Starting
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: crash
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: log_channel(cluster) log [INF] : Manager daemon compute-0.jxauqt is now available
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:08:10
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [balancer INFO root] No pools available
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: devicehealth
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [devicehealth INFO root] Starting
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: iostat
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: nfs
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: orchestrator
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: pg_autoscaler
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: progress
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [progress INFO root] Loading...
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [progress INFO root] No stored events to load
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [progress INFO root] Loaded [] historic events
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [progress INFO root] Loaded OSDMap, ready.
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] recovery thread starting
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] starting setup
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: rbd_support
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: status
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: telemetry
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jxauqt/mirror_snapshot_schedule"} v 0)
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/355953899' entity='mgr.compute-0.jxauqt' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jxauqt/mirror_snapshot_schedule"} : dispatch
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] PerfHandler: starting
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TaskHandler: starting
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/355953899' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jxauqt/trash_purge_schedule"} v 0)
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/355953899' entity='mgr.compute-0.jxauqt' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jxauqt/trash_purge_schedule"} : dispatch
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] setup complete
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/355953899' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:10 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: volumes
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/355953899' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: Activating manager daemon compute-0.jxauqt
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: Manager daemon compute-0.jxauqt is now available
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: from='mgr.14102 192.168.122.100:0/355953899' entity='mgr.compute-0.jxauqt' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jxauqt/mirror_snapshot_schedule"} : dispatch
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: from='mgr.14102 192.168.122.100:0/355953899' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: from='mgr.14102 192.168.122.100:0/355953899' entity='mgr.compute-0.jxauqt' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jxauqt/trash_purge_schedule"} : dispatch
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: from='mgr.14102 192.168.122.100:0/355953899' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:10 np0005544708 ceph-mon[75204]: from='mgr.14102 192.168.122.100:0/355953899' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:11 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.jxauqt(active, since 1.02768s)
Dec  3 16:08:11 np0005544708 podman[75774]: 2025-12-03 21:08:11.377563499 +0000 UTC m=+0.070424085 container create 365945f97c1efdeea8e37357856d7fe61a63bcba1d09a71873613d28b9e1fc8f (image=quay.io/ceph/ceph:v20, name=jovial_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:11 np0005544708 systemd[1]: Started libpod-conmon-365945f97c1efdeea8e37357856d7fe61a63bcba1d09a71873613d28b9e1fc8f.scope.
Dec  3 16:08:11 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:11 np0005544708 podman[75774]: 2025-12-03 21:08:11.34894817 +0000 UTC m=+0.041808806 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:11 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c7bf5af164041e03e645a94edd7926ba2d9933da01eeaf2bc9f8a516d63294/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:11 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c7bf5af164041e03e645a94edd7926ba2d9933da01eeaf2bc9f8a516d63294/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:11 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c7bf5af164041e03e645a94edd7926ba2d9933da01eeaf2bc9f8a516d63294/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:11 np0005544708 podman[75774]: 2025-12-03 21:08:11.452424902 +0000 UTC m=+0.145285468 container init 365945f97c1efdeea8e37357856d7fe61a63bcba1d09a71873613d28b9e1fc8f (image=quay.io/ceph/ceph:v20, name=jovial_ptolemy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 16:08:11 np0005544708 podman[75774]: 2025-12-03 21:08:11.456831521 +0000 UTC m=+0.149692067 container start 365945f97c1efdeea8e37357856d7fe61a63bcba1d09a71873613d28b9e1fc8f (image=quay.io/ceph/ceph:v20, name=jovial_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:11 np0005544708 podman[75774]: 2025-12-03 21:08:11.459927177 +0000 UTC m=+0.152787733 container attach 365945f97c1efdeea8e37357856d7fe61a63bcba1d09a71873613d28b9e1fc8f (image=quay.io/ceph/ceph:v20, name=jovial_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  3 16:08:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  3 16:08:11 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2867261980' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]: 
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]: {
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    "fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    "health": {
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "status": "HEALTH_OK",
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "checks": {},
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "mutes": []
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    },
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    "election_epoch": 5,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    "quorum": [
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        0
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    ],
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    "quorum_names": [
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "compute-0"
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    ],
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    "quorum_age": 10,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    "monmap": {
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "epoch": 1,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "min_mon_release_name": "tentacle",
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "num_mons": 1
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    },
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    "osdmap": {
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "epoch": 1,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "num_osds": 0,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "num_up_osds": 0,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "osd_up_since": 0,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "num_in_osds": 0,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "osd_in_since": 0,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "num_remapped_pgs": 0
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    },
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    "pgmap": {
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "pgs_by_state": [],
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "num_pgs": 0,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "num_pools": 0,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "num_objects": 0,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "data_bytes": 0,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "bytes_used": 0,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "bytes_avail": 0,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "bytes_total": 0
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    },
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    "fsmap": {
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "epoch": 1,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "btime": "2025-12-03T21:07:59:373870+0000",
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "by_rank": [],
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "up:standby": 0
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    },
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    "mgrmap": {
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "available": true,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "num_standbys": 0,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "modules": [
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:            "iostat",
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:            "nfs"
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        ],
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "services": {}
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    },
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    "servicemap": {
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "epoch": 1,
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "modified": "2025-12-03T21:07:59.377140+0000",
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:        "services": {}
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    },
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]:    "progress_events": {}
Dec  3 16:08:11 np0005544708 jovial_ptolemy[75790]: }
Dec  3 16:08:11 np0005544708 systemd[1]: libpod-365945f97c1efdeea8e37357856d7fe61a63bcba1d09a71873613d28b9e1fc8f.scope: Deactivated successfully.
Dec  3 16:08:11 np0005544708 podman[75774]: 2025-12-03 21:08:11.981636952 +0000 UTC m=+0.674497498 container died 365945f97c1efdeea8e37357856d7fe61a63bcba1d09a71873613d28b9e1fc8f (image=quay.io/ceph/ceph:v20, name=jovial_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Dec  3 16:08:12 np0005544708 systemd[1]: var-lib-containers-storage-overlay-f4c7bf5af164041e03e645a94edd7926ba2d9933da01eeaf2bc9f8a516d63294-merged.mount: Deactivated successfully.
Dec  3 16:08:12 np0005544708 podman[75774]: 2025-12-03 21:08:12.033942697 +0000 UTC m=+0.726803283 container remove 365945f97c1efdeea8e37357856d7fe61a63bcba1d09a71873613d28b9e1fc8f (image=quay.io/ceph/ceph:v20, name=jovial_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 16:08:12 np0005544708 systemd[1]: libpod-conmon-365945f97c1efdeea8e37357856d7fe61a63bcba1d09a71873613d28b9e1fc8f.scope: Deactivated successfully.
Dec  3 16:08:12 np0005544708 podman[75829]: 2025-12-03 21:08:12.134505447 +0000 UTC m=+0.068519587 container create c1a0a7c3309e80aae883d82985494f464a34503b9f45b64aa543f8a6caae0c69 (image=quay.io/ceph/ceph:v20, name=amazing_banzai, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  3 16:08:12 np0005544708 ceph-mgr[75500]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 16:08:12 np0005544708 systemd[1]: Started libpod-conmon-c1a0a7c3309e80aae883d82985494f464a34503b9f45b64aa543f8a6caae0c69.scope.
Dec  3 16:08:12 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:12 np0005544708 podman[75829]: 2025-12-03 21:08:12.105018496 +0000 UTC m=+0.039032676 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:12 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.jxauqt(active, since 2s)
Dec  3 16:08:12 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbcfbe208bd9ca0709a6492187b58d9c27119f76e0a809027689a9e341ec9c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbcfbe208bd9ca0709a6492187b58d9c27119f76e0a809027689a9e341ec9c5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbcfbe208bd9ca0709a6492187b58d9c27119f76e0a809027689a9e341ec9c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbcfbe208bd9ca0709a6492187b58d9c27119f76e0a809027689a9e341ec9c5/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:12 np0005544708 podman[75829]: 2025-12-03 21:08:12.237285852 +0000 UTC m=+0.171299992 container init c1a0a7c3309e80aae883d82985494f464a34503b9f45b64aa543f8a6caae0c69 (image=quay.io/ceph/ceph:v20, name=amazing_banzai, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:08:12 np0005544708 podman[75829]: 2025-12-03 21:08:12.247834742 +0000 UTC m=+0.181848872 container start c1a0a7c3309e80aae883d82985494f464a34503b9f45b64aa543f8a6caae0c69 (image=quay.io/ceph/ceph:v20, name=amazing_banzai, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  3 16:08:12 np0005544708 podman[75829]: 2025-12-03 21:08:12.252137119 +0000 UTC m=+0.186151309 container attach c1a0a7c3309e80aae883d82985494f464a34503b9f45b64aa543f8a6caae0c69 (image=quay.io/ceph/ceph:v20, name=amazing_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:08:12 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  3 16:08:12 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1362526996' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec  3 16:08:12 np0005544708 amazing_banzai[75846]: 
Dec  3 16:08:12 np0005544708 amazing_banzai[75846]: [global]
Dec  3 16:08:12 np0005544708 amazing_banzai[75846]: 	fsid = c21de27e-a7fd-594b-8324-0697ba9aab3a
Dec  3 16:08:12 np0005544708 amazing_banzai[75846]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec  3 16:08:12 np0005544708 amazing_banzai[75846]: 	osd_crush_chooseleaf_type = 0
Dec  3 16:08:12 np0005544708 systemd[1]: libpod-c1a0a7c3309e80aae883d82985494f464a34503b9f45b64aa543f8a6caae0c69.scope: Deactivated successfully.
Dec  3 16:08:12 np0005544708 podman[75829]: 2025-12-03 21:08:12.677654353 +0000 UTC m=+0.611668513 container died c1a0a7c3309e80aae883d82985494f464a34503b9f45b64aa543f8a6caae0c69 (image=quay.io/ceph/ceph:v20, name=amazing_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 16:08:12 np0005544708 systemd[1]: var-lib-containers-storage-overlay-bbbcfbe208bd9ca0709a6492187b58d9c27119f76e0a809027689a9e341ec9c5-merged.mount: Deactivated successfully.
Dec  3 16:08:12 np0005544708 podman[75829]: 2025-12-03 21:08:12.71837187 +0000 UTC m=+0.652385970 container remove c1a0a7c3309e80aae883d82985494f464a34503b9f45b64aa543f8a6caae0c69 (image=quay.io/ceph/ceph:v20, name=amazing_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 16:08:12 np0005544708 systemd[1]: libpod-conmon-c1a0a7c3309e80aae883d82985494f464a34503b9f45b64aa543f8a6caae0c69.scope: Deactivated successfully.
Dec  3 16:08:12 np0005544708 podman[75885]: 2025-12-03 21:08:12.800495574 +0000 UTC m=+0.056712785 container create ea5540398642d6005d3abdf2cf973eb64190e828c8d36b15725a5a325a410572 (image=quay.io/ceph/ceph:v20, name=amazing_yalow, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:08:12 np0005544708 systemd[1]: Started libpod-conmon-ea5540398642d6005d3abdf2cf973eb64190e828c8d36b15725a5a325a410572.scope.
Dec  3 16:08:12 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b6f5265776cd1e7e8c0e3843cfa5075ce901cd1499302428e39487089036c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b6f5265776cd1e7e8c0e3843cfa5075ce901cd1499302428e39487089036c4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2b6f5265776cd1e7e8c0e3843cfa5075ce901cd1499302428e39487089036c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:12 np0005544708 podman[75885]: 2025-12-03 21:08:12.772522392 +0000 UTC m=+0.028739643 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:12 np0005544708 podman[75885]: 2025-12-03 21:08:12.88314435 +0000 UTC m=+0.139361601 container init ea5540398642d6005d3abdf2cf973eb64190e828c8d36b15725a5a325a410572 (image=quay.io/ceph/ceph:v20, name=amazing_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec  3 16:08:12 np0005544708 podman[75885]: 2025-12-03 21:08:12.892621285 +0000 UTC m=+0.148838456 container start ea5540398642d6005d3abdf2cf973eb64190e828c8d36b15725a5a325a410572 (image=quay.io/ceph/ceph:v20, name=amazing_yalow, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:12 np0005544708 podman[75885]: 2025-12-03 21:08:12.896561852 +0000 UTC m=+0.152779043 container attach ea5540398642d6005d3abdf2cf973eb64190e828c8d36b15725a5a325a410572 (image=quay.io/ceph/ceph:v20, name=amazing_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 16:08:13 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/1362526996' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec  3 16:08:13 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec  3 16:08:13 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1711009155' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:14 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/1711009155' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Dec  3 16:08:14 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1711009155' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr respawn  1: '-n'
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr respawn  2: 'mgr.compute-0.jxauqt'
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr respawn  3: '-f'
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr respawn  4: '--setuser'
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr respawn  5: 'ceph'
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr respawn  6: '--setgroup'
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr respawn  7: 'ceph'
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr respawn  8: '--default-log-to-file=false'
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr respawn  9: '--default-log-to-journald=true'
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr respawn  exe_path /proc/self/exe
Dec  3 16:08:14 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.jxauqt(active, since 4s)
Dec  3 16:08:14 np0005544708 systemd[1]: libpod-ea5540398642d6005d3abdf2cf973eb64190e828c8d36b15725a5a325a410572.scope: Deactivated successfully.
Dec  3 16:08:14 np0005544708 podman[75885]: 2025-12-03 21:08:14.374878068 +0000 UTC m=+1.631095259 container died ea5540398642d6005d3abdf2cf973eb64190e828c8d36b15725a5a325a410572 (image=quay.io/ceph/ceph:v20, name=amazing_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec  3 16:08:14 np0005544708 systemd[1]: var-lib-containers-storage-overlay-d2b6f5265776cd1e7e8c0e3843cfa5075ce901cd1499302428e39487089036c4-merged.mount: Deactivated successfully.
Dec  3 16:08:14 np0005544708 podman[75885]: 2025-12-03 21:08:14.416681453 +0000 UTC m=+1.672898634 container remove ea5540398642d6005d3abdf2cf973eb64190e828c8d36b15725a5a325a410572 (image=quay.io/ceph/ceph:v20, name=amazing_yalow, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 16:08:14 np0005544708 systemd[1]: libpod-conmon-ea5540398642d6005d3abdf2cf973eb64190e828c8d36b15725a5a325a410572.scope: Deactivated successfully.
Dec  3 16:08:14 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jxauqt[75496]: ignoring --setuser ceph since I am not root
Dec  3 16:08:14 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jxauqt[75496]: ignoring --setgroup ceph since I am not root
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: pidfile_write: ignore empty --pid-file
Dec  3 16:08:14 np0005544708 podman[75938]: 2025-12-03 21:08:14.497256317 +0000 UTC m=+0.058951361 container create 92f34ac4fc50af42c3d00b6f08f8b8efd2852d65de3e90ba1848191eea754aa2 (image=quay.io/ceph/ceph:v20, name=unruffled_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'alerts'
Dec  3 16:08:14 np0005544708 systemd[1]: Started libpod-conmon-92f34ac4fc50af42c3d00b6f08f8b8efd2852d65de3e90ba1848191eea754aa2.scope.
Dec  3 16:08:14 np0005544708 podman[75938]: 2025-12-03 21:08:14.469829879 +0000 UTC m=+0.031524993 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:14 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:14 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3df54f66bb4e31dfaedb7c2751db292414d2130a21981565c0e267c097294dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:14 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3df54f66bb4e31dfaedb7c2751db292414d2130a21981565c0e267c097294dc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:14 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3df54f66bb4e31dfaedb7c2751db292414d2130a21981565c0e267c097294dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'balancer'
Dec  3 16:08:14 np0005544708 podman[75938]: 2025-12-03 21:08:14.602226596 +0000 UTC m=+0.163921670 container init 92f34ac4fc50af42c3d00b6f08f8b8efd2852d65de3e90ba1848191eea754aa2 (image=quay.io/ceph/ceph:v20, name=unruffled_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 16:08:14 np0005544708 podman[75938]: 2025-12-03 21:08:14.609426514 +0000 UTC m=+0.171121578 container start 92f34ac4fc50af42c3d00b6f08f8b8efd2852d65de3e90ba1848191eea754aa2 (image=quay.io/ceph/ceph:v20, name=unruffled_lehmann, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  3 16:08:14 np0005544708 podman[75938]: 2025-12-03 21:08:14.613637788 +0000 UTC m=+0.175332822 container attach 92f34ac4fc50af42c3d00b6f08f8b8efd2852d65de3e90ba1848191eea754aa2 (image=quay.io/ceph/ceph:v20, name=unruffled_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec  3 16:08:14 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'cephadm'
Dec  3 16:08:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  3 16:08:15 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1638651339' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec  3 16:08:15 np0005544708 unruffled_lehmann[75974]: {
Dec  3 16:08:15 np0005544708 unruffled_lehmann[75974]:    "epoch": 5,
Dec  3 16:08:15 np0005544708 unruffled_lehmann[75974]:    "available": true,
Dec  3 16:08:15 np0005544708 unruffled_lehmann[75974]:    "active_name": "compute-0.jxauqt",
Dec  3 16:08:15 np0005544708 unruffled_lehmann[75974]:    "num_standby": 0
Dec  3 16:08:15 np0005544708 unruffled_lehmann[75974]: }
Dec  3 16:08:15 np0005544708 systemd[1]: libpod-92f34ac4fc50af42c3d00b6f08f8b8efd2852d65de3e90ba1848191eea754aa2.scope: Deactivated successfully.
Dec  3 16:08:15 np0005544708 podman[75938]: 2025-12-03 21:08:15.125642463 +0000 UTC m=+0.687337497 container died 92f34ac4fc50af42c3d00b6f08f8b8efd2852d65de3e90ba1848191eea754aa2 (image=quay.io/ceph/ceph:v20, name=unruffled_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 16:08:15 np0005544708 systemd[1]: var-lib-containers-storage-overlay-f3df54f66bb4e31dfaedb7c2751db292414d2130a21981565c0e267c097294dc-merged.mount: Deactivated successfully.
Dec  3 16:08:15 np0005544708 podman[75938]: 2025-12-03 21:08:15.160770153 +0000 UTC m=+0.722465187 container remove 92f34ac4fc50af42c3d00b6f08f8b8efd2852d65de3e90ba1848191eea754aa2 (image=quay.io/ceph/ceph:v20, name=unruffled_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:15 np0005544708 systemd[1]: libpod-conmon-92f34ac4fc50af42c3d00b6f08f8b8efd2852d65de3e90ba1848191eea754aa2.scope: Deactivated successfully.
Dec  3 16:08:15 np0005544708 podman[76023]: 2025-12-03 21:08:15.218718917 +0000 UTC m=+0.040011521 container create 6f285a3045bc7e29d550ccf3a12dbf0d925dd7e92be59df31f4277037bc96799 (image=quay.io/ceph/ceph:v20, name=agitated_solomon, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:08:15 np0005544708 systemd[1]: Started libpod-conmon-6f285a3045bc7e29d550ccf3a12dbf0d925dd7e92be59df31f4277037bc96799.scope.
Dec  3 16:08:15 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:15 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8808b3f871a4539265f66bc53d7bbb252ede060b9af73bba2e0b3b3851f3e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:15 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8808b3f871a4539265f66bc53d7bbb252ede060b9af73bba2e0b3b3851f3e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:15 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8808b3f871a4539265f66bc53d7bbb252ede060b9af73bba2e0b3b3851f3e8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:15 np0005544708 podman[76023]: 2025-12-03 21:08:15.284723852 +0000 UTC m=+0.106016486 container init 6f285a3045bc7e29d550ccf3a12dbf0d925dd7e92be59df31f4277037bc96799 (image=quay.io/ceph/ceph:v20, name=agitated_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:15 np0005544708 podman[76023]: 2025-12-03 21:08:15.294411071 +0000 UTC m=+0.115703675 container start 6f285a3045bc7e29d550ccf3a12dbf0d925dd7e92be59df31f4277037bc96799 (image=quay.io/ceph/ceph:v20, name=agitated_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:08:15 np0005544708 podman[76023]: 2025-12-03 21:08:15.200795604 +0000 UTC m=+0.022088248 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:15 np0005544708 podman[76023]: 2025-12-03 21:08:15.297880237 +0000 UTC m=+0.119172841 container attach 6f285a3045bc7e29d550ccf3a12dbf0d925dd7e92be59df31f4277037bc96799 (image=quay.io/ceph/ceph:v20, name=agitated_solomon, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:08:15 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/1711009155' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  3 16:08:15 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'crash'
Dec  3 16:08:15 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'dashboard'
Dec  3 16:08:16 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'devicehealth'
Dec  3 16:08:16 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'diskprediction_local'
Dec  3 16:08:16 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jxauqt[75496]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  3 16:08:16 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jxauqt[75496]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  3 16:08:16 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jxauqt[75496]:  from numpy import show_config as show_numpy_config
Dec  3 16:08:16 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'influx'
Dec  3 16:08:16 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'insights'
Dec  3 16:08:16 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'iostat'
Dec  3 16:08:16 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'k8sevents'
Dec  3 16:08:17 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'localpool'
Dec  3 16:08:17 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'mds_autoscaler'
Dec  3 16:08:17 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'mirroring'
Dec  3 16:08:17 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'nfs'
Dec  3 16:08:17 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'orchestrator'
Dec  3 16:08:18 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'osd_perf_query'
Dec  3 16:08:18 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'osd_support'
Dec  3 16:08:18 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'pg_autoscaler'
Dec  3 16:08:18 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'progress'
Dec  3 16:08:18 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'prometheus'
Dec  3 16:08:18 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'rbd_support'
Dec  3 16:08:18 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'rgw'
Dec  3 16:08:19 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'rook'
Dec  3 16:08:19 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'selftest'
Dec  3 16:08:19 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'smb'
Dec  3 16:08:20 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'snap_schedule'
Dec  3 16:08:20 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'stats'
Dec  3 16:08:20 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'status'
Dec  3 16:08:20 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'telegraf'
Dec  3 16:08:20 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'telemetry'
Dec  3 16:08:20 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'test_orchestrator'
Dec  3 16:08:20 np0005544708 ceph-mgr[75500]: mgr[py] Loading python module 'volumes'
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: log_channel(cluster) log [INF] : Active manager daemon compute-0.jxauqt restarted
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.jxauqt
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: ms_deliver_dispatch: unhandled message 0x55f9c7438000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: mgr handle_mgr_map Activating!
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: mgr handle_mgr_map I am now activating
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.jxauqt(active, starting, since 0.0584135s)
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.jxauqt", "id": "compute-0.jxauqt"} v 0)
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "mgr metadata", "who": "compute-0.jxauqt", "id": "compute-0.jxauqt"} : dispatch
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "mds metadata"} : dispatch
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).mds e1 all = 1
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata"} : dispatch
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "mon metadata"} : dispatch
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: balancer
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Starting
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: log_channel(cluster) log [INF] : Manager daemon compute-0.jxauqt is now available
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:08:21
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] No pools available
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: Active manager daemon compute-0.jxauqt restarted
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: Activating manager daemon compute-0.jxauqt
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: Manager daemon compute-0.jxauqt is now available
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019914911 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: cephadm
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: crash
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: devicehealth
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: iostat
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: nfs
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [devicehealth INFO root] Starting
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: orchestrator
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: pg_autoscaler
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: progress
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [progress INFO root] Loading...
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [progress INFO root] No stored events to load
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [progress INFO root] Loaded [] historic events
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [progress INFO root] Loaded OSDMap, ready.
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] recovery thread starting
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] starting setup
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: rbd_support
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: status
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: telemetry
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jxauqt/mirror_snapshot_schedule"} v 0)
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jxauqt/mirror_snapshot_schedule"} : dispatch
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] PerfHandler: starting
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TaskHandler: starting
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jxauqt/trash_purge_schedule"} v 0)
Dec  3 16:08:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jxauqt/trash_purge_schedule"} : dispatch
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] setup complete
Dec  3 16:08:21 np0005544708 ceph-mgr[75500]: mgr load Constructed class from module: volumes
Dec  3 16:08:22 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.jxauqt(active, since 1.06777s)
Dec  3 16:08:22 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec  3 16:08:22 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec  3 16:08:22 np0005544708 agitated_solomon[76039]: {
Dec  3 16:08:22 np0005544708 agitated_solomon[76039]:    "mgrmap_epoch": 7,
Dec  3 16:08:22 np0005544708 agitated_solomon[76039]:    "initialized": true
Dec  3 16:08:22 np0005544708 agitated_solomon[76039]: }
Dec  3 16:08:22 np0005544708 systemd[1]: libpod-6f285a3045bc7e29d550ccf3a12dbf0d925dd7e92be59df31f4277037bc96799.scope: Deactivated successfully.
Dec  3 16:08:22 np0005544708 podman[76023]: 2025-12-03 21:08:22.161874578 +0000 UTC m=+6.983167212 container died 6f285a3045bc7e29d550ccf3a12dbf0d925dd7e92be59df31f4277037bc96799 (image=quay.io/ceph/ceph:v20, name=agitated_solomon, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:22 np0005544708 systemd[1]: var-lib-containers-storage-overlay-cb8808b3f871a4539265f66bc53d7bbb252ede060b9af73bba2e0b3b3851f3e8-merged.mount: Deactivated successfully.
Dec  3 16:08:22 np0005544708 podman[76023]: 2025-12-03 21:08:22.220085769 +0000 UTC m=+7.041378413 container remove 6f285a3045bc7e29d550ccf3a12dbf0d925dd7e92be59df31f4277037bc96799 (image=quay.io/ceph/ceph:v20, name=agitated_solomon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:22 np0005544708 systemd[1]: libpod-conmon-6f285a3045bc7e29d550ccf3a12dbf0d925dd7e92be59df31f4277037bc96799.scope: Deactivated successfully.
Dec  3 16:08:22 np0005544708 podman[76185]: 2025-12-03 21:08:22.286667088 +0000 UTC m=+0.044408471 container create aa239cefdcac36c44ad2eb0bb12cb57c4107976138c81ce71dffc76dcbaccfaf (image=quay.io/ceph/ceph:v20, name=bold_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  3 16:08:22 np0005544708 systemd[1]: Started libpod-conmon-aa239cefdcac36c44ad2eb0bb12cb57c4107976138c81ce71dffc76dcbaccfaf.scope.
Dec  3 16:08:22 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:22 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dcf0ea3f9e86b0a6c01327bc93b8c8b2b43fd26fd1a8b03c30504f359e6a52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:22 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dcf0ea3f9e86b0a6c01327bc93b8c8b2b43fd26fd1a8b03c30504f359e6a52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:22 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dcf0ea3f9e86b0a6c01327bc93b8c8b2b43fd26fd1a8b03c30504f359e6a52/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:22 np0005544708 podman[76185]: 2025-12-03 21:08:22.360532566 +0000 UTC m=+0.118273969 container init aa239cefdcac36c44ad2eb0bb12cb57c4107976138c81ce71dffc76dcbaccfaf (image=quay.io/ceph/ceph:v20, name=bold_sutherland, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 16:08:22 np0005544708 podman[76185]: 2025-12-03 21:08:22.266393676 +0000 UTC m=+0.024135109 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:22 np0005544708 podman[76185]: 2025-12-03 21:08:22.36593231 +0000 UTC m=+0.123673703 container start aa239cefdcac36c44ad2eb0bb12cb57c4107976138c81ce71dffc76dcbaccfaf (image=quay.io/ceph/ceph:v20, name=bold_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  3 16:08:22 np0005544708 podman[76185]: 2025-12-03 21:08:22.369841097 +0000 UTC m=+0.127582500 container attach aa239cefdcac36c44ad2eb0bb12cb57c4107976138c81ce71dffc76dcbaccfaf (image=quay.io/ceph/ceph:v20, name=bold_sutherland, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 16:08:22 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:22 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:22 np0005544708 ceph-mon[75204]: Found migration_current of "None". Setting to last migration.
Dec  3 16:08:22 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:22 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:22 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jxauqt/mirror_snapshot_schedule"} : dispatch
Dec  3 16:08:22 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jxauqt/trash_purge_schedule"} : dispatch
Dec  3 16:08:22 np0005544708 ceph-mgr[75500]: [cephadm INFO cherrypy.error] [03/Dec/2025:21:08:22] ENGINE Bus STARTING
Dec  3 16:08:22 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : [03/Dec/2025:21:08:22] ENGINE Bus STARTING
Dec  3 16:08:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Dec  3 16:08:22 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2846862938' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Dec  3 16:08:22 np0005544708 ceph-mgr[75500]: [cephadm INFO cherrypy.error] [03/Dec/2025:21:08:22] ENGINE Serving on http://192.168.122.100:8765
Dec  3 16:08:22 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : [03/Dec/2025:21:08:22] ENGINE Serving on http://192.168.122.100:8765
Dec  3 16:08:23 np0005544708 ceph-mgr[75500]: [cephadm INFO cherrypy.error] [03/Dec/2025:21:08:23] ENGINE Serving on https://192.168.122.100:7150
Dec  3 16:08:23 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : [03/Dec/2025:21:08:23] ENGINE Serving on https://192.168.122.100:7150
Dec  3 16:08:23 np0005544708 ceph-mgr[75500]: [cephadm INFO cherrypy.error] [03/Dec/2025:21:08:23] ENGINE Bus STARTED
Dec  3 16:08:23 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : [03/Dec/2025:21:08:23] ENGINE Bus STARTED
Dec  3 16:08:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  3 16:08:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec  3 16:08:23 np0005544708 ceph-mgr[75500]: [cephadm INFO cherrypy.error] [03/Dec/2025:21:08:23] ENGINE Client ('192.168.122.100', 49586) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  3 16:08:23 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : [03/Dec/2025:21:08:23] ENGINE Client ('192.168.122.100', 49586) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  3 16:08:23 np0005544708 ceph-mgr[75500]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 16:08:23 np0005544708 ceph-mon[75204]: [03/Dec/2025:21:08:22] ENGINE Bus STARTING
Dec  3 16:08:23 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/2846862938' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Dec  3 16:08:23 np0005544708 ceph-mon[75204]: [03/Dec/2025:21:08:22] ENGINE Serving on http://192.168.122.100:8765
Dec  3 16:08:23 np0005544708 ceph-mon[75204]: [03/Dec/2025:21:08:23] ENGINE Serving on https://192.168.122.100:7150
Dec  3 16:08:23 np0005544708 ceph-mon[75204]: [03/Dec/2025:21:08:23] ENGINE Bus STARTED
Dec  3 16:08:23 np0005544708 ceph-mon[75204]: [03/Dec/2025:21:08:23] ENGINE Client ('192.168.122.100', 49586) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  3 16:08:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2846862938' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Dec  3 16:08:23 np0005544708 bold_sutherland[76201]: module 'orchestrator' is already enabled (always-on)
Dec  3 16:08:23 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.jxauqt(active, since 2s)
Dec  3 16:08:23 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:23 np0005544708 systemd[1]: libpod-aa239cefdcac36c44ad2eb0bb12cb57c4107976138c81ce71dffc76dcbaccfaf.scope: Deactivated successfully.
Dec  3 16:08:23 np0005544708 podman[76185]: 2025-12-03 21:08:23.745159563 +0000 UTC m=+1.502901006 container died aa239cefdcac36c44ad2eb0bb12cb57c4107976138c81ce71dffc76dcbaccfaf (image=quay.io/ceph/ceph:v20, name=bold_sutherland, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  3 16:08:23 np0005544708 systemd[1]: var-lib-containers-storage-overlay-37dcf0ea3f9e86b0a6c01327bc93b8c8b2b43fd26fd1a8b03c30504f359e6a52-merged.mount: Deactivated successfully.
Dec  3 16:08:23 np0005544708 podman[76185]: 2025-12-03 21:08:23.792100375 +0000 UTC m=+1.549841768 container remove aa239cefdcac36c44ad2eb0bb12cb57c4107976138c81ce71dffc76dcbaccfaf (image=quay.io/ceph/ceph:v20, name=bold_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  3 16:08:23 np0005544708 systemd[1]: libpod-conmon-aa239cefdcac36c44ad2eb0bb12cb57c4107976138c81ce71dffc76dcbaccfaf.scope: Deactivated successfully.
Dec  3 16:08:23 np0005544708 podman[76262]: 2025-12-03 21:08:23.856414647 +0000 UTC m=+0.047238811 container create bad730a94e0e312c7595b53e179b264cf6735213a5a13ba41e4e0c74fb2af779 (image=quay.io/ceph/ceph:v20, name=optimistic_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:08:23 np0005544708 systemd[1]: Started libpod-conmon-bad730a94e0e312c7595b53e179b264cf6735213a5a13ba41e4e0c74fb2af779.scope.
Dec  3 16:08:23 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:23 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17882cb9eef892dc18564498f212051334045edeee853f24a3caf07db2f3f814/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:23 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17882cb9eef892dc18564498f212051334045edeee853f24a3caf07db2f3f814/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:23 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17882cb9eef892dc18564498f212051334045edeee853f24a3caf07db2f3f814/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:23 np0005544708 podman[76262]: 2025-12-03 21:08:23.832652349 +0000 UTC m=+0.023476593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:23 np0005544708 podman[76262]: 2025-12-03 21:08:23.932584363 +0000 UTC m=+0.123408547 container init bad730a94e0e312c7595b53e179b264cf6735213a5a13ba41e4e0c74fb2af779 (image=quay.io/ceph/ceph:v20, name=optimistic_chebyshev, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 16:08:23 np0005544708 podman[76262]: 2025-12-03 21:08:23.950704611 +0000 UTC m=+0.141528775 container start bad730a94e0e312c7595b53e179b264cf6735213a5a13ba41e4e0c74fb2af779 (image=quay.io/ceph/ceph:v20, name=optimistic_chebyshev, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:08:23 np0005544708 podman[76262]: 2025-12-03 21:08:23.955044049 +0000 UTC m=+0.145868233 container attach bad730a94e0e312c7595b53e179b264cf6735213a5a13ba41e4e0c74fb2af779 (image=quay.io/ceph/ceph:v20, name=optimistic_chebyshev, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:24 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:08:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec  3 16:08:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  3 16:08:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec  3 16:08:24 np0005544708 systemd[1]: libpod-bad730a94e0e312c7595b53e179b264cf6735213a5a13ba41e4e0c74fb2af779.scope: Deactivated successfully.
Dec  3 16:08:24 np0005544708 podman[76262]: 2025-12-03 21:08:24.440983109 +0000 UTC m=+0.631807293 container died bad730a94e0e312c7595b53e179b264cf6735213a5a13ba41e4e0c74fb2af779 (image=quay.io/ceph/ceph:v20, name=optimistic_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec  3 16:08:24 np0005544708 systemd[1]: var-lib-containers-storage-overlay-17882cb9eef892dc18564498f212051334045edeee853f24a3caf07db2f3f814-merged.mount: Deactivated successfully.
Dec  3 16:08:24 np0005544708 podman[76262]: 2025-12-03 21:08:24.502892521 +0000 UTC m=+0.693716695 container remove bad730a94e0e312c7595b53e179b264cf6735213a5a13ba41e4e0c74fb2af779 (image=quay.io/ceph/ceph:v20, name=optimistic_chebyshev, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:24 np0005544708 systemd[1]: libpod-conmon-bad730a94e0e312c7595b53e179b264cf6735213a5a13ba41e4e0c74fb2af779.scope: Deactivated successfully.
Dec  3 16:08:24 np0005544708 podman[76317]: 2025-12-03 21:08:24.556922208 +0000 UTC m=+0.037386326 container create 20cc8fd3273905d87093de465b056bcd5e80f4ac687f9085365d5b5c8c8377df (image=quay.io/ceph/ceph:v20, name=pensive_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Dec  3 16:08:24 np0005544708 systemd[1]: Started libpod-conmon-20cc8fd3273905d87093de465b056bcd5e80f4ac687f9085365d5b5c8c8377df.scope.
Dec  3 16:08:24 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:24 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821a196636743973932c9101c237b7efee32866e7fb11369e83d932fa291dd43/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:24 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821a196636743973932c9101c237b7efee32866e7fb11369e83d932fa291dd43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:24 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821a196636743973932c9101c237b7efee32866e7fb11369e83d932fa291dd43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:24 np0005544708 podman[76317]: 2025-12-03 21:08:24.634720524 +0000 UTC m=+0.115184712 container init 20cc8fd3273905d87093de465b056bcd5e80f4ac687f9085365d5b5c8c8377df (image=quay.io/ceph/ceph:v20, name=pensive_antonelli, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:24 np0005544708 podman[76317]: 2025-12-03 21:08:24.539244421 +0000 UTC m=+0.019708559 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:24 np0005544708 podman[76317]: 2025-12-03 21:08:24.641361499 +0000 UTC m=+0.121825637 container start 20cc8fd3273905d87093de465b056bcd5e80f4ac687f9085365d5b5c8c8377df (image=quay.io/ceph/ceph:v20, name=pensive_antonelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec  3 16:08:24 np0005544708 podman[76317]: 2025-12-03 21:08:24.644914347 +0000 UTC m=+0.125378545 container attach 20cc8fd3273905d87093de465b056bcd5e80f4ac687f9085365d5b5c8c8377df (image=quay.io/ceph/ceph:v20, name=pensive_antonelli, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  3 16:08:24 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/2846862938' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Dec  3 16:08:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:25 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:08:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec  3 16:08:25 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:25 np0005544708 ceph-mgr[75500]: [cephadm INFO root] Set ssh ssh_user
Dec  3 16:08:25 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec  3 16:08:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec  3 16:08:25 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:25 np0005544708 ceph-mgr[75500]: [cephadm INFO root] Set ssh ssh_config
Dec  3 16:08:25 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec  3 16:08:25 np0005544708 ceph-mgr[75500]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec  3 16:08:25 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec  3 16:08:25 np0005544708 pensive_antonelli[76334]: ssh user set to ceph-admin. sudo will be used
Dec  3 16:08:25 np0005544708 systemd[1]: libpod-20cc8fd3273905d87093de465b056bcd5e80f4ac687f9085365d5b5c8c8377df.scope: Deactivated successfully.
Dec  3 16:08:25 np0005544708 podman[76317]: 2025-12-03 21:08:25.072169353 +0000 UTC m=+0.552633491 container died 20cc8fd3273905d87093de465b056bcd5e80f4ac687f9085365d5b5c8c8377df (image=quay.io/ceph/ceph:v20, name=pensive_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  3 16:08:25 np0005544708 systemd[1]: var-lib-containers-storage-overlay-821a196636743973932c9101c237b7efee32866e7fb11369e83d932fa291dd43-merged.mount: Deactivated successfully.
Dec  3 16:08:25 np0005544708 podman[76317]: 2025-12-03 21:08:25.125473913 +0000 UTC m=+0.605938061 container remove 20cc8fd3273905d87093de465b056bcd5e80f4ac687f9085365d5b5c8c8377df (image=quay.io/ceph/ceph:v20, name=pensive_antonelli, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  3 16:08:25 np0005544708 ceph-mgr[75500]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 16:08:25 np0005544708 systemd[1]: libpod-conmon-20cc8fd3273905d87093de465b056bcd5e80f4ac687f9085365d5b5c8c8377df.scope: Deactivated successfully.
Dec  3 16:08:25 np0005544708 podman[76374]: 2025-12-03 21:08:25.19765757 +0000 UTC m=+0.055224748 container create d1d3ca8cb9029d698fb4afd45382ff8590ce15333c8212c069b2c8b30f0c413f (image=quay.io/ceph/ceph:v20, name=funny_darwin, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:25 np0005544708 systemd[1]: Started libpod-conmon-d1d3ca8cb9029d698fb4afd45382ff8590ce15333c8212c069b2c8b30f0c413f.scope.
Dec  3 16:08:25 np0005544708 podman[76374]: 2025-12-03 21:08:25.168061598 +0000 UTC m=+0.025628846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:25 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f1fa760d7c2cb668456c0ce4d737f4a614b9629b7dcc31063dceea44d190ac/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f1fa760d7c2cb668456c0ce4d737f4a614b9629b7dcc31063dceea44d190ac/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f1fa760d7c2cb668456c0ce4d737f4a614b9629b7dcc31063dceea44d190ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f1fa760d7c2cb668456c0ce4d737f4a614b9629b7dcc31063dceea44d190ac/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28f1fa760d7c2cb668456c0ce4d737f4a614b9629b7dcc31063dceea44d190ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:25 np0005544708 podman[76374]: 2025-12-03 21:08:25.298660191 +0000 UTC m=+0.156227359 container init d1d3ca8cb9029d698fb4afd45382ff8590ce15333c8212c069b2c8b30f0c413f (image=quay.io/ceph/ceph:v20, name=funny_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  3 16:08:25 np0005544708 podman[76374]: 2025-12-03 21:08:25.310780421 +0000 UTC m=+0.168347599 container start d1d3ca8cb9029d698fb4afd45382ff8590ce15333c8212c069b2c8b30f0c413f (image=quay.io/ceph/ceph:v20, name=funny_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:25 np0005544708 podman[76374]: 2025-12-03 21:08:25.314874002 +0000 UTC m=+0.172441180 container attach d1d3ca8cb9029d698fb4afd45382ff8590ce15333c8212c069b2c8b30f0c413f (image=quay.io/ceph/ceph:v20, name=funny_darwin, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 16:08:25 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:08:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec  3 16:08:25 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:25 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:25 np0005544708 ceph-mgr[75500]: [cephadm INFO root] Set ssh ssh_identity_key
Dec  3 16:08:25 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec  3 16:08:25 np0005544708 ceph-mgr[75500]: [cephadm INFO root] Set ssh private key
Dec  3 16:08:25 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Set ssh private key
Dec  3 16:08:25 np0005544708 systemd[1]: libpod-d1d3ca8cb9029d698fb4afd45382ff8590ce15333c8212c069b2c8b30f0c413f.scope: Deactivated successfully.
Dec  3 16:08:25 np0005544708 podman[76374]: 2025-12-03 21:08:25.761080868 +0000 UTC m=+0.618648046 container died d1d3ca8cb9029d698fb4afd45382ff8590ce15333c8212c069b2c8b30f0c413f (image=quay.io/ceph/ceph:v20, name=funny_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:25 np0005544708 systemd[1]: var-lib-containers-storage-overlay-28f1fa760d7c2cb668456c0ce4d737f4a614b9629b7dcc31063dceea44d190ac-merged.mount: Deactivated successfully.
Dec  3 16:08:25 np0005544708 podman[76374]: 2025-12-03 21:08:25.798475754 +0000 UTC m=+0.656042902 container remove d1d3ca8cb9029d698fb4afd45382ff8590ce15333c8212c069b2c8b30f0c413f (image=quay.io/ceph/ceph:v20, name=funny_darwin, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:08:25 np0005544708 systemd[1]: libpod-conmon-d1d3ca8cb9029d698fb4afd45382ff8590ce15333c8212c069b2c8b30f0c413f.scope: Deactivated successfully.
Dec  3 16:08:25 np0005544708 podman[76428]: 2025-12-03 21:08:25.874677871 +0000 UTC m=+0.057330821 container create 3064e6256545be95e5d1be40fb3b18af89a2666f9272a678eb6b3a81ae2f4827 (image=quay.io/ceph/ceph:v20, name=gallant_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:08:25 np0005544708 systemd[1]: Started libpod-conmon-3064e6256545be95e5d1be40fb3b18af89a2666f9272a678eb6b3a81ae2f4827.scope.
Dec  3 16:08:25 np0005544708 podman[76428]: 2025-12-03 21:08:25.842801851 +0000 UTC m=+0.025454861 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:25 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6071a846085fa60d580b71e5dcb1cedd246ac315379a174eea184c5710763a72/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6071a846085fa60d580b71e5dcb1cedd246ac315379a174eea184c5710763a72/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6071a846085fa60d580b71e5dcb1cedd246ac315379a174eea184c5710763a72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6071a846085fa60d580b71e5dcb1cedd246ac315379a174eea184c5710763a72/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6071a846085fa60d580b71e5dcb1cedd246ac315379a174eea184c5710763a72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:25 np0005544708 podman[76428]: 2025-12-03 21:08:25.971180989 +0000 UTC m=+0.153833969 container init 3064e6256545be95e5d1be40fb3b18af89a2666f9272a678eb6b3a81ae2f4827 (image=quay.io/ceph/ceph:v20, name=gallant_bhabha, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 16:08:25 np0005544708 podman[76428]: 2025-12-03 21:08:25.989832121 +0000 UTC m=+0.172485041 container start 3064e6256545be95e5d1be40fb3b18af89a2666f9272a678eb6b3a81ae2f4827 (image=quay.io/ceph/ceph:v20, name=gallant_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:25 np0005544708 podman[76428]: 2025-12-03 21:08:25.993801389 +0000 UTC m=+0.176454349 container attach 3064e6256545be95e5d1be40fb3b18af89a2666f9272a678eb6b3a81ae2f4827 (image=quay.io/ceph/ceph:v20, name=gallant_bhabha, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  3 16:08:26 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:26 np0005544708 ceph-mon[75204]: Set ssh ssh_user
Dec  3 16:08:26 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:26 np0005544708 ceph-mon[75204]: Set ssh ssh_config
Dec  3 16:08:26 np0005544708 ceph-mon[75204]: ssh user set to ceph-admin. sudo will be used
Dec  3 16:08:26 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:26 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:08:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec  3 16:08:26 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:26 np0005544708 ceph-mgr[75500]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec  3 16:08:26 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec  3 16:08:26 np0005544708 systemd[1]: libpod-3064e6256545be95e5d1be40fb3b18af89a2666f9272a678eb6b3a81ae2f4827.scope: Deactivated successfully.
Dec  3 16:08:26 np0005544708 podman[76428]: 2025-12-03 21:08:26.405780998 +0000 UTC m=+0.588433928 container died 3064e6256545be95e5d1be40fb3b18af89a2666f9272a678eb6b3a81ae2f4827 (image=quay.io/ceph/ceph:v20, name=gallant_bhabha, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec  3 16:08:26 np0005544708 systemd[1]: var-lib-containers-storage-overlay-6071a846085fa60d580b71e5dcb1cedd246ac315379a174eea184c5710763a72-merged.mount: Deactivated successfully.
Dec  3 16:08:26 np0005544708 podman[76428]: 2025-12-03 21:08:26.443741738 +0000 UTC m=+0.626394658 container remove 3064e6256545be95e5d1be40fb3b18af89a2666f9272a678eb6b3a81ae2f4827 (image=quay.io/ceph/ceph:v20, name=gallant_bhabha, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  3 16:08:26 np0005544708 systemd[1]: libpod-conmon-3064e6256545be95e5d1be40fb3b18af89a2666f9272a678eb6b3a81ae2f4827.scope: Deactivated successfully.
Dec  3 16:08:26 np0005544708 podman[76482]: 2025-12-03 21:08:26.533189372 +0000 UTC m=+0.062939049 container create 74293bb479b2dff6581cb68b03a3b89244feead0649162afe32fd606672f9c61 (image=quay.io/ceph/ceph:v20, name=wizardly_elbakyan, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 16:08:26 np0005544708 systemd[1]: Started libpod-conmon-74293bb479b2dff6581cb68b03a3b89244feead0649162afe32fd606672f9c61.scope.
Dec  3 16:08:26 np0005544708 podman[76482]: 2025-12-03 21:08:26.506835129 +0000 UTC m=+0.036584876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:26 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:26 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/700f99c3eb0ccc68daf51ef90e2b44453ac89cf0a2db6b7da8303fa65140866f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:26 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/700f99c3eb0ccc68daf51ef90e2b44453ac89cf0a2db6b7da8303fa65140866f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:26 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/700f99c3eb0ccc68daf51ef90e2b44453ac89cf0a2db6b7da8303fa65140866f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:26 np0005544708 podman[76482]: 2025-12-03 21:08:26.631203139 +0000 UTC m=+0.160952866 container init 74293bb479b2dff6581cb68b03a3b89244feead0649162afe32fd606672f9c61 (image=quay.io/ceph/ceph:v20, name=wizardly_elbakyan, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Dec  3 16:08:26 np0005544708 podman[76482]: 2025-12-03 21:08:26.64055385 +0000 UTC m=+0.170303557 container start 74293bb479b2dff6581cb68b03a3b89244feead0649162afe32fd606672f9c61 (image=quay.io/ceph/ceph:v20, name=wizardly_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:26 np0005544708 podman[76482]: 2025-12-03 21:08:26.644239962 +0000 UTC m=+0.173989659 container attach 74293bb479b2dff6581cb68b03a3b89244feead0649162afe32fd606672f9c61 (image=quay.io/ceph/ceph:v20, name=wizardly_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052782 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:08:27 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:08:27 np0005544708 wizardly_elbakyan[76500]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCmXlK5y3Q4gaxv6M50V21Q5BxfeScA1efDlUPFJQx+7fjDW9LbjybJnwuUiZddHS7AZQRDwmbSKKxujrM5O/RgOcUubf//z1FleN0ZLzMN2Kr2gR59aLCX7I6nP+kfOWwDLCmPnnlx2ep27ttsyPvFt+E6LeBlVRn9DnMTfpoiiTjepXC8sLt6ogfOug/YhSPG6VZt2HY+eLupSup1SvQ+fP/YIzuAXPUwfRP9rehCj247OHEahfKtxzK+7b222mYPUvFVhbsVfq9ZMr2iPqRln/w4MSWo3GMwwEWZB5cNsS0qJYM9Hr5wrTQN22SOhd/BssM3SzThpbExCkObrut812OwlOJn80SkhESE0NxdQWW1tJOXVufFebcyMMrqpG9eEQvBVnVM/jkEw5epe8tMa5K8J1RPZ9xezSTHCxcgMv8ma+AxmKOAh8Dl6sBHWchaRvllX9UgXJFPcgPD2C8sEGcsJyEEyaToEr5gqYVkE8O0HpUZvOi5w8AlXNxUt0k= zuul@controller
Dec  3 16:08:27 np0005544708 systemd[1]: libpod-74293bb479b2dff6581cb68b03a3b89244feead0649162afe32fd606672f9c61.scope: Deactivated successfully.
Dec  3 16:08:27 np0005544708 podman[76482]: 2025-12-03 21:08:27.103152522 +0000 UTC m=+0.632902179 container died 74293bb479b2dff6581cb68b03a3b89244feead0649162afe32fd606672f9c61 (image=quay.io/ceph/ceph:v20, name=wizardly_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:27 np0005544708 systemd[1]: var-lib-containers-storage-overlay-700f99c3eb0ccc68daf51ef90e2b44453ac89cf0a2db6b7da8303fa65140866f-merged.mount: Deactivated successfully.
Dec  3 16:08:27 np0005544708 ceph-mgr[75500]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 16:08:27 np0005544708 podman[76482]: 2025-12-03 21:08:27.13985689 +0000 UTC m=+0.669606557 container remove 74293bb479b2dff6581cb68b03a3b89244feead0649162afe32fd606672f9c61 (image=quay.io/ceph/ceph:v20, name=wizardly_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  3 16:08:27 np0005544708 systemd[1]: libpod-conmon-74293bb479b2dff6581cb68b03a3b89244feead0649162afe32fd606672f9c61.scope: Deactivated successfully.
Dec  3 16:08:27 np0005544708 podman[76537]: 2025-12-03 21:08:27.210948021 +0000 UTC m=+0.048957684 container create 8a65f8d31b686b4902fdfda5464e97e7eef2b3bc863de964c4901dca9115be18 (image=quay.io/ceph/ceph:v20, name=vigorous_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 16:08:27 np0005544708 systemd[1]: Started libpod-conmon-8a65f8d31b686b4902fdfda5464e97e7eef2b3bc863de964c4901dca9115be18.scope.
Dec  3 16:08:27 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:27 np0005544708 podman[76537]: 2025-12-03 21:08:27.186038094 +0000 UTC m=+0.024047857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:27 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa870aef962dfd4c92a92bc1b3bb8c16f2ab8d9afe63b354477e89eeef141f1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:27 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa870aef962dfd4c92a92bc1b3bb8c16f2ab8d9afe63b354477e89eeef141f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:27 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa870aef962dfd4c92a92bc1b3bb8c16f2ab8d9afe63b354477e89eeef141f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:27 np0005544708 podman[76537]: 2025-12-03 21:08:27.293141495 +0000 UTC m=+0.131151168 container init 8a65f8d31b686b4902fdfda5464e97e7eef2b3bc863de964c4901dca9115be18 (image=quay.io/ceph/ceph:v20, name=vigorous_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:27 np0005544708 podman[76537]: 2025-12-03 21:08:27.30184039 +0000 UTC m=+0.139850063 container start 8a65f8d31b686b4902fdfda5464e97e7eef2b3bc863de964c4901dca9115be18 (image=quay.io/ceph/ceph:v20, name=vigorous_buck, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:27 np0005544708 podman[76537]: 2025-12-03 21:08:27.307443079 +0000 UTC m=+0.145452782 container attach 8a65f8d31b686b4902fdfda5464e97e7eef2b3bc863de964c4901dca9115be18 (image=quay.io/ceph/ceph:v20, name=vigorous_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 16:08:27 np0005544708 ceph-mon[75204]: Set ssh ssh_identity_key
Dec  3 16:08:27 np0005544708 ceph-mon[75204]: Set ssh private key
Dec  3 16:08:27 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:27 np0005544708 ceph-mon[75204]: Set ssh ssh_identity_pub
Dec  3 16:08:27 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:08:27 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:27 np0005544708 systemd-logind[787]: New session 20 of user ceph-admin.
Dec  3 16:08:28 np0005544708 systemd[1]: Created slice User Slice of UID 42477.
Dec  3 16:08:28 np0005544708 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  3 16:08:28 np0005544708 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  3 16:08:28 np0005544708 systemd[1]: Starting User Manager for UID 42477...
Dec  3 16:08:28 np0005544708 systemd[76584]: Queued start job for default target Main User Target.
Dec  3 16:08:28 np0005544708 systemd-logind[787]: New session 22 of user ceph-admin.
Dec  3 16:08:28 np0005544708 systemd[76584]: Created slice User Application Slice.
Dec  3 16:08:28 np0005544708 systemd[76584]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  3 16:08:28 np0005544708 systemd[76584]: Started Daily Cleanup of User's Temporary Directories.
Dec  3 16:08:28 np0005544708 systemd[76584]: Reached target Paths.
Dec  3 16:08:28 np0005544708 systemd[76584]: Reached target Timers.
Dec  3 16:08:28 np0005544708 systemd[76584]: Starting D-Bus User Message Bus Socket...
Dec  3 16:08:28 np0005544708 systemd[76584]: Starting Create User's Volatile Files and Directories...
Dec  3 16:08:28 np0005544708 systemd[76584]: Finished Create User's Volatile Files and Directories.
Dec  3 16:08:28 np0005544708 systemd[76584]: Listening on D-Bus User Message Bus Socket.
Dec  3 16:08:28 np0005544708 systemd[76584]: Reached target Sockets.
Dec  3 16:08:28 np0005544708 systemd[76584]: Reached target Basic System.
Dec  3 16:08:28 np0005544708 systemd[76584]: Reached target Main User Target.
Dec  3 16:08:28 np0005544708 systemd[76584]: Startup finished in 168ms.
Dec  3 16:08:28 np0005544708 systemd[1]: Started User Manager for UID 42477.
Dec  3 16:08:28 np0005544708 systemd[1]: Started Session 20 of User ceph-admin.
Dec  3 16:08:28 np0005544708 systemd[1]: Started Session 22 of User ceph-admin.
Dec  3 16:08:28 np0005544708 systemd-logind[787]: New session 23 of user ceph-admin.
Dec  3 16:08:28 np0005544708 systemd[1]: Started Session 23 of User ceph-admin.
Dec  3 16:08:28 np0005544708 systemd-logind[787]: New session 24 of user ceph-admin.
Dec  3 16:08:28 np0005544708 systemd[1]: Started Session 24 of User ceph-admin.
Dec  3 16:08:29 np0005544708 ceph-mgr[75500]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec  3 16:08:29 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec  3 16:08:29 np0005544708 ceph-mgr[75500]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 16:08:29 np0005544708 systemd-logind[787]: New session 25 of user ceph-admin.
Dec  3 16:08:29 np0005544708 systemd[1]: Started Session 25 of User ceph-admin.
Dec  3 16:08:29 np0005544708 systemd-logind[787]: New session 26 of user ceph-admin.
Dec  3 16:08:29 np0005544708 systemd[1]: Started Session 26 of User ceph-admin.
Dec  3 16:08:29 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:30 np0005544708 systemd-logind[787]: New session 27 of user ceph-admin.
Dec  3 16:08:30 np0005544708 systemd[1]: Started Session 27 of User ceph-admin.
Dec  3 16:08:30 np0005544708 ceph-mon[75204]: Deploying cephadm binary to compute-0
Dec  3 16:08:30 np0005544708 systemd-logind[787]: New session 28 of user ceph-admin.
Dec  3 16:08:30 np0005544708 systemd[1]: Started Session 28 of User ceph-admin.
Dec  3 16:08:30 np0005544708 systemd-logind[787]: New session 29 of user ceph-admin.
Dec  3 16:08:30 np0005544708 systemd[1]: Started Session 29 of User ceph-admin.
Dec  3 16:08:31 np0005544708 ceph-mgr[75500]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 16:08:31 np0005544708 systemd-logind[787]: New session 30 of user ceph-admin.
Dec  3 16:08:31 np0005544708 systemd[1]: Started Session 30 of User ceph-admin.
Dec  3 16:08:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054704 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:08:31 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:32 np0005544708 systemd-logind[787]: New session 31 of user ceph-admin.
Dec  3 16:08:32 np0005544708 systemd[1]: Started Session 31 of User ceph-admin.
Dec  3 16:08:33 np0005544708 systemd-logind[787]: New session 32 of user ceph-admin.
Dec  3 16:08:33 np0005544708 systemd[1]: Started Session 32 of User ceph-admin.
Dec  3 16:08:33 np0005544708 ceph-mgr[75500]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 16:08:33 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  3 16:08:33 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:33 np0005544708 ceph-mgr[75500]: [cephadm INFO root] Added host compute-0
Dec  3 16:08:33 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  3 16:08:33 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  3 16:08:33 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec  3 16:08:33 np0005544708 vigorous_buck[76554]: Added host 'compute-0' with addr '192.168.122.100'
Dec  3 16:08:33 np0005544708 systemd[1]: libpod-8a65f8d31b686b4902fdfda5464e97e7eef2b3bc863de964c4901dca9115be18.scope: Deactivated successfully.
Dec  3 16:08:33 np0005544708 podman[76537]: 2025-12-03 21:08:33.608122145 +0000 UTC m=+6.446131888 container died 8a65f8d31b686b4902fdfda5464e97e7eef2b3bc863de964c4901dca9115be18 (image=quay.io/ceph/ceph:v20, name=vigorous_buck, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 16:08:33 np0005544708 systemd[1]: var-lib-containers-storage-overlay-5fa870aef962dfd4c92a92bc1b3bb8c16f2ab8d9afe63b354477e89eeef141f1-merged.mount: Deactivated successfully.
Dec  3 16:08:33 np0005544708 podman[76537]: 2025-12-03 21:08:33.672426917 +0000 UTC m=+6.510436600 container remove 8a65f8d31b686b4902fdfda5464e97e7eef2b3bc863de964c4901dca9115be18 (image=quay.io/ceph/ceph:v20, name=vigorous_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:33 np0005544708 systemd[1]: libpod-conmon-8a65f8d31b686b4902fdfda5464e97e7eef2b3bc863de964c4901dca9115be18.scope: Deactivated successfully.
Dec  3 16:08:33 np0005544708 podman[76977]: 2025-12-03 21:08:33.749395023 +0000 UTC m=+0.052448619 container create 2e83b089b30b51a5b6596a2ee6c239c0c9e2e75ccd9ff2d4822a36750419af04 (image=quay.io/ceph/ceph:v20, name=great_antonelli, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec  3 16:08:33 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:33 np0005544708 systemd[1]: Started libpod-conmon-2e83b089b30b51a5b6596a2ee6c239c0c9e2e75ccd9ff2d4822a36750419af04.scope.
Dec  3 16:08:33 np0005544708 podman[76977]: 2025-12-03 21:08:33.731461689 +0000 UTC m=+0.034515315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:33 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:33 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f7741a6dda40c1f59190b99dc3f6fa85b43067a9795a6511e0e8def865249f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:33 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f7741a6dda40c1f59190b99dc3f6fa85b43067a9795a6511e0e8def865249f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:33 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1f7741a6dda40c1f59190b99dc3f6fa85b43067a9795a6511e0e8def865249f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:33 np0005544708 podman[76977]: 2025-12-03 21:08:33.855253464 +0000 UTC m=+0.158307090 container init 2e83b089b30b51a5b6596a2ee6c239c0c9e2e75ccd9ff2d4822a36750419af04 (image=quay.io/ceph/ceph:v20, name=great_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:33 np0005544708 podman[76977]: 2025-12-03 21:08:33.867894846 +0000 UTC m=+0.170948472 container start 2e83b089b30b51a5b6596a2ee6c239c0c9e2e75ccd9ff2d4822a36750419af04 (image=quay.io/ceph/ceph:v20, name=great_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:33 np0005544708 podman[76977]: 2025-12-03 21:08:33.87244731 +0000 UTC m=+0.175501016 container attach 2e83b089b30b51a5b6596a2ee6c239c0c9e2e75ccd9ff2d4822a36750419af04 (image=quay.io/ceph/ceph:v20, name=great_antonelli, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec  3 16:08:34 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:08:34 np0005544708 ceph-mgr[75500]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec  3 16:08:34 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec  3 16:08:34 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  3 16:08:34 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:34 np0005544708 great_antonelli[77018]: Scheduled mon update...
Dec  3 16:08:34 np0005544708 systemd[1]: libpod-2e83b089b30b51a5b6596a2ee6c239c0c9e2e75ccd9ff2d4822a36750419af04.scope: Deactivated successfully.
Dec  3 16:08:34 np0005544708 podman[77069]: 2025-12-03 21:08:34.392612246 +0000 UTC m=+0.023008701 container died 2e83b089b30b51a5b6596a2ee6c239c0c9e2e75ccd9ff2d4822a36750419af04 (image=quay.io/ceph/ceph:v20, name=great_antonelli, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:34 np0005544708 systemd[1]: var-lib-containers-storage-overlay-e1f7741a6dda40c1f59190b99dc3f6fa85b43067a9795a6511e0e8def865249f-merged.mount: Deactivated successfully.
Dec  3 16:08:34 np0005544708 podman[77069]: 2025-12-03 21:08:34.427058759 +0000 UTC m=+0.057455194 container remove 2e83b089b30b51a5b6596a2ee6c239c0c9e2e75ccd9ff2d4822a36750419af04 (image=quay.io/ceph/ceph:v20, name=great_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 16:08:34 np0005544708 systemd[1]: libpod-conmon-2e83b089b30b51a5b6596a2ee6c239c0c9e2e75ccd9ff2d4822a36750419af04.scope: Deactivated successfully.
Dec  3 16:08:34 np0005544708 podman[77084]: 2025-12-03 21:08:34.511583072 +0000 UTC m=+0.055312691 container create e7d408957c2bd91ba9870cb546b3a163074fd8e41e656bbbdb6603eb7f0c5473 (image=quay.io/ceph/ceph:v20, name=wizardly_yalow, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 16:08:34 np0005544708 systemd[1]: Started libpod-conmon-e7d408957c2bd91ba9870cb546b3a163074fd8e41e656bbbdb6603eb7f0c5473.scope.
Dec  3 16:08:34 np0005544708 podman[77053]: 2025-12-03 21:08:34.56041186 +0000 UTC m=+0.520572748 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:34 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:34 np0005544708 ceph-mon[75204]: Added host compute-0
Dec  3 16:08:34 np0005544708 ceph-mon[75204]: Saving service mon spec with placement count:5
Dec  3 16:08:34 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:34 np0005544708 podman[77084]: 2025-12-03 21:08:34.485608859 +0000 UTC m=+0.029338588 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:34 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:34 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65387f9127fbc680e5ce1e823b0c634bbd8781b9a38a6f5d6d5c0c47a30512ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:34 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65387f9127fbc680e5ce1e823b0c634bbd8781b9a38a6f5d6d5c0c47a30512ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:34 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65387f9127fbc680e5ce1e823b0c634bbd8781b9a38a6f5d6d5c0c47a30512ec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:34 np0005544708 podman[77084]: 2025-12-03 21:08:34.607373133 +0000 UTC m=+0.151102842 container init e7d408957c2bd91ba9870cb546b3a163074fd8e41e656bbbdb6603eb7f0c5473 (image=quay.io/ceph/ceph:v20, name=wizardly_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  3 16:08:34 np0005544708 podman[77084]: 2025-12-03 21:08:34.617779811 +0000 UTC m=+0.161509430 container start e7d408957c2bd91ba9870cb546b3a163074fd8e41e656bbbdb6603eb7f0c5473 (image=quay.io/ceph/ceph:v20, name=wizardly_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:08:34 np0005544708 podman[77084]: 2025-12-03 21:08:34.62221651 +0000 UTC m=+0.165946169 container attach e7d408957c2bd91ba9870cb546b3a163074fd8e41e656bbbdb6603eb7f0c5473 (image=quay.io/ceph/ceph:v20, name=wizardly_yalow, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 16:08:34 np0005544708 podman[77116]: 2025-12-03 21:08:34.697872773 +0000 UTC m=+0.046721857 container create 0deee56b314367d338cf0346100e9219a0a47b3e8930a9340ebf59147cb7aa1b (image=quay.io/ceph/ceph:v20, name=strange_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec  3 16:08:34 np0005544708 systemd[1]: Started libpod-conmon-0deee56b314367d338cf0346100e9219a0a47b3e8930a9340ebf59147cb7aa1b.scope.
Dec  3 16:08:34 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:34 np0005544708 podman[77116]: 2025-12-03 21:08:34.679503489 +0000 UTC m=+0.028352593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:34 np0005544708 podman[77116]: 2025-12-03 21:08:34.782446557 +0000 UTC m=+0.131295681 container init 0deee56b314367d338cf0346100e9219a0a47b3e8930a9340ebf59147cb7aa1b (image=quay.io/ceph/ceph:v20, name=strange_lovelace, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:08:34 np0005544708 podman[77116]: 2025-12-03 21:08:34.787163573 +0000 UTC m=+0.136012657 container start 0deee56b314367d338cf0346100e9219a0a47b3e8930a9340ebf59147cb7aa1b (image=quay.io/ceph/ceph:v20, name=strange_lovelace, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  3 16:08:34 np0005544708 podman[77116]: 2025-12-03 21:08:34.791018109 +0000 UTC m=+0.139867243 container attach 0deee56b314367d338cf0346100e9219a0a47b3e8930a9340ebf59147cb7aa1b (image=quay.io/ceph/ceph:v20, name=strange_lovelace, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Dec  3 16:08:34 np0005544708 strange_lovelace[77140]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Dec  3 16:08:34 np0005544708 systemd[1]: libpod-0deee56b314367d338cf0346100e9219a0a47b3e8930a9340ebf59147cb7aa1b.scope: Deactivated successfully.
Dec  3 16:08:34 np0005544708 podman[77116]: 2025-12-03 21:08:34.877350836 +0000 UTC m=+0.226199950 container died 0deee56b314367d338cf0346100e9219a0a47b3e8930a9340ebf59147cb7aa1b (image=quay.io/ceph/ceph:v20, name=strange_lovelace, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec  3 16:08:34 np0005544708 systemd[1]: var-lib-containers-storage-overlay-0d403c9de6e192ef079128a729c2ae483103031555dcfbecb9d8ece07938e6e3-merged.mount: Deactivated successfully.
Dec  3 16:08:34 np0005544708 podman[77116]: 2025-12-03 21:08:34.939796082 +0000 UTC m=+0.288645186 container remove 0deee56b314367d338cf0346100e9219a0a47b3e8930a9340ebf59147cb7aa1b (image=quay.io/ceph/ceph:v20, name=strange_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec  3 16:08:34 np0005544708 systemd[1]: libpod-conmon-0deee56b314367d338cf0346100e9219a0a47b3e8930a9340ebf59147cb7aa1b.scope: Deactivated successfully.
Dec  3 16:08:34 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec  3 16:08:34 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:35 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:08:35 np0005544708 ceph-mgr[75500]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec  3 16:08:35 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec  3 16:08:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  3 16:08:35 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:35 np0005544708 wizardly_yalow[77100]: Scheduled mgr update...
Dec  3 16:08:35 np0005544708 systemd[1]: libpod-e7d408957c2bd91ba9870cb546b3a163074fd8e41e656bbbdb6603eb7f0c5473.scope: Deactivated successfully.
Dec  3 16:08:35 np0005544708 podman[77084]: 2025-12-03 21:08:35.124018083 +0000 UTC m=+0.667747722 container died e7d408957c2bd91ba9870cb546b3a163074fd8e41e656bbbdb6603eb7f0c5473 (image=quay.io/ceph/ceph:v20, name=wizardly_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 16:08:35 np0005544708 ceph-mgr[75500]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 16:08:35 np0005544708 systemd[1]: var-lib-containers-storage-overlay-65387f9127fbc680e5ce1e823b0c634bbd8781b9a38a6f5d6d5c0c47a30512ec-merged.mount: Deactivated successfully.
Dec  3 16:08:35 np0005544708 podman[77084]: 2025-12-03 21:08:35.163851379 +0000 UTC m=+0.707581008 container remove e7d408957c2bd91ba9870cb546b3a163074fd8e41e656bbbdb6603eb7f0c5473 (image=quay.io/ceph/ceph:v20, name=wizardly_yalow, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:35 np0005544708 systemd[1]: libpod-conmon-e7d408957c2bd91ba9870cb546b3a163074fd8e41e656bbbdb6603eb7f0c5473.scope: Deactivated successfully.
Dec  3 16:08:35 np0005544708 podman[77232]: 2025-12-03 21:08:35.238526517 +0000 UTC m=+0.055602397 container create 435b493dd842b04b302bc1288a801cc39422701487c6ed24bd711145141201c4 (image=quay.io/ceph/ceph:v20, name=dreamy_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  3 16:08:35 np0005544708 systemd[1]: Started libpod-conmon-435b493dd842b04b302bc1288a801cc39422701487c6ed24bd711145141201c4.scope.
Dec  3 16:08:35 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:35 np0005544708 podman[77232]: 2025-12-03 21:08:35.208117275 +0000 UTC m=+0.025193195 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:35 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/595a9f842ede8d6fa33fe7138cbad391cacc8d34f8163c6a3b6395a43e70ee6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:35 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/595a9f842ede8d6fa33fe7138cbad391cacc8d34f8163c6a3b6395a43e70ee6d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:35 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/595a9f842ede8d6fa33fe7138cbad391cacc8d34f8163c6a3b6395a43e70ee6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:35 np0005544708 podman[77232]: 2025-12-03 21:08:35.318866846 +0000 UTC m=+0.135942756 container init 435b493dd842b04b302bc1288a801cc39422701487c6ed24bd711145141201c4 (image=quay.io/ceph/ceph:v20, name=dreamy_goodall, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:35 np0005544708 podman[77232]: 2025-12-03 21:08:35.324742181 +0000 UTC m=+0.141818021 container start 435b493dd842b04b302bc1288a801cc39422701487c6ed24bd711145141201c4 (image=quay.io/ceph/ceph:v20, name=dreamy_goodall, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:35 np0005544708 podman[77232]: 2025-12-03 21:08:35.328403652 +0000 UTC m=+0.145479492 container attach 435b493dd842b04b302bc1288a801cc39422701487c6ed24bd711145141201c4 (image=quay.io/ceph/ceph:v20, name=dreamy_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:08:35 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:35 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:08:35 np0005544708 ceph-mgr[75500]: [cephadm INFO root] Saving service crash spec with placement *
Dec  3 16:08:35 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec  3 16:08:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  3 16:08:35 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:35 np0005544708 dreamy_goodall[77248]: Scheduled crash update...
Dec  3 16:08:35 np0005544708 systemd[1]: libpod-435b493dd842b04b302bc1288a801cc39422701487c6ed24bd711145141201c4.scope: Deactivated successfully.
Dec  3 16:08:35 np0005544708 podman[77232]: 2025-12-03 21:08:35.739479298 +0000 UTC m=+0.556555138 container died 435b493dd842b04b302bc1288a801cc39422701487c6ed24bd711145141201c4 (image=quay.io/ceph/ceph:v20, name=dreamy_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:08:35 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:35 np0005544708 systemd[1]: var-lib-containers-storage-overlay-595a9f842ede8d6fa33fe7138cbad391cacc8d34f8163c6a3b6395a43e70ee6d-merged.mount: Deactivated successfully.
Dec  3 16:08:35 np0005544708 podman[77232]: 2025-12-03 21:08:35.773033989 +0000 UTC m=+0.590109829 container remove 435b493dd842b04b302bc1288a801cc39422701487c6ed24bd711145141201c4 (image=quay.io/ceph/ceph:v20, name=dreamy_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:35 np0005544708 systemd[1]: libpod-conmon-435b493dd842b04b302bc1288a801cc39422701487c6ed24bd711145141201c4.scope: Deactivated successfully.
Dec  3 16:08:35 np0005544708 podman[77357]: 2025-12-03 21:08:35.849335418 +0000 UTC m=+0.051661800 container create e7da159b758f7ba92741d6da82be0c064d31ee497199417ec490eddfe9ec46ae (image=quay.io/ceph/ceph:v20, name=vibrant_greider, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:08:35 np0005544708 systemd[1]: Started libpod-conmon-e7da159b758f7ba92741d6da82be0c064d31ee497199417ec490eddfe9ec46ae.scope.
Dec  3 16:08:35 np0005544708 podman[77357]: 2025-12-03 21:08:35.821984971 +0000 UTC m=+0.024311433 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:35 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:35 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dfe735af4f365273e69b76bf1edcd01459d8ac37d56ae3a310059779cca2bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:35 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dfe735af4f365273e69b76bf1edcd01459d8ac37d56ae3a310059779cca2bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:35 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dfe735af4f365273e69b76bf1edcd01459d8ac37d56ae3a310059779cca2bb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:35 np0005544708 podman[77357]: 2025-12-03 21:08:35.941009028 +0000 UTC m=+0.143335430 container init e7da159b758f7ba92741d6da82be0c064d31ee497199417ec490eddfe9ec46ae (image=quay.io/ceph/ceph:v20, name=vibrant_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:08:35 np0005544708 podman[77357]: 2025-12-03 21:08:35.953903747 +0000 UTC m=+0.156230159 container start e7da159b758f7ba92741d6da82be0c064d31ee497199417ec490eddfe9ec46ae (image=quay.io/ceph/ceph:v20, name=vibrant_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:08:35 np0005544708 podman[77357]: 2025-12-03 21:08:35.958417708 +0000 UTC m=+0.160744090 container attach e7da159b758f7ba92741d6da82be0c064d31ee497199417ec490eddfe9ec46ae (image=quay.io/ceph/ceph:v20, name=vibrant_greider, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:35 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:35 np0005544708 ceph-mon[75204]: Saving service mgr spec with placement count:2
Dec  3 16:08:35 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:35 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:35 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:36 np0005544708 podman[77427]: 2025-12-03 21:08:36.176131268 +0000 UTC m=+0.064975539 container exec 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:36 np0005544708 podman[77427]: 2025-12-03 21:08:36.304134907 +0000 UTC m=+0.192979228 container exec_died 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  3 16:08:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec  3 16:08:36 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3131284038' entity='client.admin' 
Dec  3 16:08:36 np0005544708 systemd[1]: libpod-e7da159b758f7ba92741d6da82be0c064d31ee497199417ec490eddfe9ec46ae.scope: Deactivated successfully.
Dec  3 16:08:36 np0005544708 podman[77357]: 2025-12-03 21:08:36.402064652 +0000 UTC m=+0.604391034 container died e7da159b758f7ba92741d6da82be0c064d31ee497199417ec490eddfe9ec46ae (image=quay.io/ceph/ceph:v20, name=vibrant_greider, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:36 np0005544708 systemd[1]: var-lib-containers-storage-overlay-65dfe735af4f365273e69b76bf1edcd01459d8ac37d56ae3a310059779cca2bb-merged.mount: Deactivated successfully.
Dec  3 16:08:36 np0005544708 podman[77357]: 2025-12-03 21:08:36.440606595 +0000 UTC m=+0.642932997 container remove e7da159b758f7ba92741d6da82be0c064d31ee497199417ec490eddfe9ec46ae (image=quay.io/ceph/ceph:v20, name=vibrant_greider, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 16:08:36 np0005544708 systemd[1]: libpod-conmon-e7da159b758f7ba92741d6da82be0c064d31ee497199417ec490eddfe9ec46ae.scope: Deactivated successfully.
Dec  3 16:08:36 np0005544708 podman[77518]: 2025-12-03 21:08:36.497377161 +0000 UTC m=+0.038063124 container create ce2b24863cf0f200e28b520c508f696f8695ea603a1159d7d24d2eebe52bae81 (image=quay.io/ceph/ceph:v20, name=nervous_northcutt, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:36 np0005544708 systemd[1]: Started libpod-conmon-ce2b24863cf0f200e28b520c508f696f8695ea603a1159d7d24d2eebe52bae81.scope.
Dec  3 16:08:36 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:36 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ca0dc417f3dcc9536c4d10f622b174df9ee43be3edd695f6c465aae1f16035/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:36 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ca0dc417f3dcc9536c4d10f622b174df9ee43be3edd695f6c465aae1f16035/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:36 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ca0dc417f3dcc9536c4d10f622b174df9ee43be3edd695f6c465aae1f16035/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:36 np0005544708 podman[77518]: 2025-12-03 21:08:36.563436436 +0000 UTC m=+0.104122429 container init ce2b24863cf0f200e28b520c508f696f8695ea603a1159d7d24d2eebe52bae81 (image=quay.io/ceph/ceph:v20, name=nervous_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec  3 16:08:36 np0005544708 podman[77518]: 2025-12-03 21:08:36.570178753 +0000 UTC m=+0.110864716 container start ce2b24863cf0f200e28b520c508f696f8695ea603a1159d7d24d2eebe52bae81 (image=quay.io/ceph/ceph:v20, name=nervous_northcutt, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:08:36 np0005544708 podman[77518]: 2025-12-03 21:08:36.573129946 +0000 UTC m=+0.113815939 container attach ce2b24863cf0f200e28b520c508f696f8695ea603a1159d7d24d2eebe52bae81 (image=quay.io/ceph/ceph:v20, name=nervous_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:36 np0005544708 podman[77518]: 2025-12-03 21:08:36.481523408 +0000 UTC m=+0.022209401 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:08:36 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:08:37 np0005544708 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77637 (sysctl)
Dec  3 16:08:37 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:08:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Dec  3 16:08:37 np0005544708 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec  3 16:08:37 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:37 np0005544708 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec  3 16:08:37 np0005544708 systemd[1]: libpod-ce2b24863cf0f200e28b520c508f696f8695ea603a1159d7d24d2eebe52bae81.scope: Deactivated successfully.
Dec  3 16:08:37 np0005544708 podman[77643]: 2025-12-03 21:08:37.123660004 +0000 UTC m=+0.046128742 container died ce2b24863cf0f200e28b520c508f696f8695ea603a1159d7d24d2eebe52bae81 (image=quay.io/ceph/ceph:v20, name=nervous_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:08:37 np0005544708 ceph-mgr[75500]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 16:08:37 np0005544708 systemd[1]: var-lib-containers-storage-overlay-25ca0dc417f3dcc9536c4d10f622b174df9ee43be3edd695f6c465aae1f16035-merged.mount: Deactivated successfully.
Dec  3 16:08:37 np0005544708 podman[77643]: 2025-12-03 21:08:37.172867283 +0000 UTC m=+0.095335961 container remove ce2b24863cf0f200e28b520c508f696f8695ea603a1159d7d24d2eebe52bae81 (image=quay.io/ceph/ceph:v20, name=nervous_northcutt, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 16:08:37 np0005544708 systemd[1]: libpod-conmon-ce2b24863cf0f200e28b520c508f696f8695ea603a1159d7d24d2eebe52bae81.scope: Deactivated successfully.
Dec  3 16:08:37 np0005544708 podman[77659]: 2025-12-03 21:08:37.267081926 +0000 UTC m=+0.056248425 container create 001cad78786951b6a0ecb62d19ed5829f0bd83e1288e11966e378b5407bd86fe (image=quay.io/ceph/ceph:v20, name=gracious_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 16:08:37 np0005544708 systemd[1]: Started libpod-conmon-001cad78786951b6a0ecb62d19ed5829f0bd83e1288e11966e378b5407bd86fe.scope.
Dec  3 16:08:37 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:37 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0e823dd64ec187dd32439c00821873f0f129dc586cf447ef648a319f02d9a5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:37 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0e823dd64ec187dd32439c00821873f0f129dc586cf447ef648a319f02d9a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:37 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0e823dd64ec187dd32439c00821873f0f129dc586cf447ef648a319f02d9a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:37 np0005544708 podman[77659]: 2025-12-03 21:08:37.246664229 +0000 UTC m=+0.035830738 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:37 np0005544708 podman[77659]: 2025-12-03 21:08:37.344327477 +0000 UTC m=+0.133494026 container init 001cad78786951b6a0ecb62d19ed5829f0bd83e1288e11966e378b5407bd86fe (image=quay.io/ceph/ceph:v20, name=gracious_buck, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec  3 16:08:37 np0005544708 podman[77659]: 2025-12-03 21:08:37.355819422 +0000 UTC m=+0.144985921 container start 001cad78786951b6a0ecb62d19ed5829f0bd83e1288e11966e378b5407bd86fe (image=quay.io/ceph/ceph:v20, name=gracious_buck, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  3 16:08:37 np0005544708 podman[77659]: 2025-12-03 21:08:37.359798961 +0000 UTC m=+0.148965460 container attach 001cad78786951b6a0ecb62d19ed5829f0bd83e1288e11966e378b5407bd86fe (image=quay.io/ceph/ceph:v20, name=gracious_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:08:37 np0005544708 ceph-mon[75204]: Saving service crash spec with placement *
Dec  3 16:08:37 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/3131284038' entity='client.admin' 
Dec  3 16:08:37 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:37 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:37 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:37 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:08:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  3 16:08:37 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:37 np0005544708 ceph-mgr[75500]: [cephadm INFO root] Added label _admin to host compute-0
Dec  3 16:08:37 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec  3 16:08:37 np0005544708 gracious_buck[77680]: Added label _admin to host compute-0
Dec  3 16:08:37 np0005544708 systemd[1]: libpod-001cad78786951b6a0ecb62d19ed5829f0bd83e1288e11966e378b5407bd86fe.scope: Deactivated successfully.
Dec  3 16:08:37 np0005544708 podman[77659]: 2025-12-03 21:08:37.835357083 +0000 UTC m=+0.624523592 container died 001cad78786951b6a0ecb62d19ed5829f0bd83e1288e11966e378b5407bd86fe (image=quay.io/ceph/ceph:v20, name=gracious_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:08:37 np0005544708 systemd[1]: var-lib-containers-storage-overlay-8c0e823dd64ec187dd32439c00821873f0f129dc586cf447ef648a319f02d9a5-merged.mount: Deactivated successfully.
Dec  3 16:08:37 np0005544708 podman[77659]: 2025-12-03 21:08:37.876084581 +0000 UTC m=+0.665251060 container remove 001cad78786951b6a0ecb62d19ed5829f0bd83e1288e11966e378b5407bd86fe (image=quay.io/ceph/ceph:v20, name=gracious_buck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:37 np0005544708 systemd[1]: libpod-conmon-001cad78786951b6a0ecb62d19ed5829f0bd83e1288e11966e378b5407bd86fe.scope: Deactivated successfully.
Dec  3 16:08:37 np0005544708 podman[77789]: 2025-12-03 21:08:37.963850694 +0000 UTC m=+0.051207449 container create e4b89c0cea1bdba758d6b79d9c0941b9ae85d72a52442ebe13c2295736120122 (image=quay.io/ceph/ceph:v20, name=nice_grothendieck, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:08:37 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:38 np0005544708 systemd[1]: Started libpod-conmon-e4b89c0cea1bdba758d6b79d9c0941b9ae85d72a52442ebe13c2295736120122.scope.
Dec  3 16:08:38 np0005544708 podman[77789]: 2025-12-03 21:08:37.942723291 +0000 UTC m=+0.030080026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:38 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:38 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7615940a10a6f245efa62cbf82fdc902280c7473a658104463abd64436ad4877/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:38 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7615940a10a6f245efa62cbf82fdc902280c7473a658104463abd64436ad4877/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:38 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7615940a10a6f245efa62cbf82fdc902280c7473a658104463abd64436ad4877/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:38 np0005544708 podman[77789]: 2025-12-03 21:08:38.068373512 +0000 UTC m=+0.155730247 container init e4b89c0cea1bdba758d6b79d9c0941b9ae85d72a52442ebe13c2295736120122 (image=quay.io/ceph/ceph:v20, name=nice_grothendieck, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:08:38 np0005544708 podman[77789]: 2025-12-03 21:08:38.076666927 +0000 UTC m=+0.164023652 container start e4b89c0cea1bdba758d6b79d9c0941b9ae85d72a52442ebe13c2295736120122 (image=quay.io/ceph/ceph:v20, name=nice_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:08:38 np0005544708 podman[77789]: 2025-12-03 21:08:38.080199794 +0000 UTC m=+0.167556559 container attach e4b89c0cea1bdba758d6b79d9c0941b9ae85d72a52442ebe13c2295736120122 (image=quay.io/ceph/ceph:v20, name=nice_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  3 16:08:38 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:38 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:38 np0005544708 podman[77898]: 2025-12-03 21:08:38.435775037 +0000 UTC m=+0.044846731 container create 930dd014193e0a3292c67c02510b44915495c89526cb1e00e3d0c396b32807b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_neumann, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:38 np0005544708 systemd[1]: Started libpod-conmon-930dd014193e0a3292c67c02510b44915495c89526cb1e00e3d0c396b32807b3.scope.
Dec  3 16:08:38 np0005544708 podman[77898]: 2025-12-03 21:08:38.413347912 +0000 UTC m=+0.022419646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:08:38 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:38 np0005544708 podman[77898]: 2025-12-03 21:08:38.530005539 +0000 UTC m=+0.139077253 container init 930dd014193e0a3292c67c02510b44915495c89526cb1e00e3d0c396b32807b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_neumann, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:38 np0005544708 podman[77898]: 2025-12-03 21:08:38.537077155 +0000 UTC m=+0.146148839 container start 930dd014193e0a3292c67c02510b44915495c89526cb1e00e3d0c396b32807b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_neumann, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 16:08:38 np0005544708 brave_neumann[77914]: 167 167
Dec  3 16:08:38 np0005544708 systemd[1]: libpod-930dd014193e0a3292c67c02510b44915495c89526cb1e00e3d0c396b32807b3.scope: Deactivated successfully.
Dec  3 16:08:38 np0005544708 podman[77898]: 2025-12-03 21:08:38.540737875 +0000 UTC m=+0.149809599 container attach 930dd014193e0a3292c67c02510b44915495c89526cb1e00e3d0c396b32807b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_neumann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  3 16:08:38 np0005544708 podman[77898]: 2025-12-03 21:08:38.542000547 +0000 UTC m=+0.151072241 container died 930dd014193e0a3292c67c02510b44915495c89526cb1e00e3d0c396b32807b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_neumann, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 16:08:38 np0005544708 systemd[1]: var-lib-containers-storage-overlay-1fa5c6f6bb0c1e3bff4e6d42fe048791a0deb73049268a955733841eb0168f24-merged.mount: Deactivated successfully.
Dec  3 16:08:38 np0005544708 podman[77898]: 2025-12-03 21:08:38.57891445 +0000 UTC m=+0.187986134 container remove 930dd014193e0a3292c67c02510b44915495c89526cb1e00e3d0c396b32807b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_neumann, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:38 np0005544708 systemd[1]: libpod-conmon-930dd014193e0a3292c67c02510b44915495c89526cb1e00e3d0c396b32807b3.scope: Deactivated successfully.
Dec  3 16:08:38 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec  3 16:08:38 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2346052553' entity='client.admin' 
Dec  3 16:08:38 np0005544708 nice_grothendieck[77835]: set mgr/dashboard/cluster/status
Dec  3 16:08:38 np0005544708 systemd[1]: libpod-e4b89c0cea1bdba758d6b79d9c0941b9ae85d72a52442ebe13c2295736120122.scope: Deactivated successfully.
Dec  3 16:08:38 np0005544708 podman[77789]: 2025-12-03 21:08:38.672701102 +0000 UTC m=+0.760057877 container died e4b89c0cea1bdba758d6b79d9c0941b9ae85d72a52442ebe13c2295736120122 (image=quay.io/ceph/ceph:v20, name=nice_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  3 16:08:38 np0005544708 systemd[1]: var-lib-containers-storage-overlay-7615940a10a6f245efa62cbf82fdc902280c7473a658104463abd64436ad4877-merged.mount: Deactivated successfully.
Dec  3 16:08:38 np0005544708 podman[77789]: 2025-12-03 21:08:38.721264355 +0000 UTC m=+0.808621100 container remove e4b89c0cea1bdba758d6b79d9c0941b9ae85d72a52442ebe13c2295736120122 (image=quay.io/ceph/ceph:v20, name=nice_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:38 np0005544708 systemd[1]: libpod-conmon-e4b89c0cea1bdba758d6b79d9c0941b9ae85d72a52442ebe13c2295736120122.scope: Deactivated successfully.
Dec  3 16:08:38 np0005544708 systemd[1]: Reloading.
Dec  3 16:08:38 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:08:38 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:08:39 np0005544708 ceph-mgr[75500]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  3 16:08:39 np0005544708 podman[77993]: 2025-12-03 21:08:39.348845621 +0000 UTC m=+0.073588364 container create 7d243ddd0e6c8418163555a177178b32dfa9e9d61d9a7d89bf78662d1adf4182 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec  3 16:08:39 np0005544708 systemd[1]: Started libpod-conmon-7d243ddd0e6c8418163555a177178b32dfa9e9d61d9a7d89bf78662d1adf4182.scope.
Dec  3 16:08:39 np0005544708 ceph-mon[75204]: Added label _admin to host compute-0
Dec  3 16:08:39 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/2346052553' entity='client.admin' 
Dec  3 16:08:39 np0005544708 podman[77993]: 2025-12-03 21:08:39.322317703 +0000 UTC m=+0.047060526 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:08:39 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:39 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d448160e9d5320d3b3e4dbb0b6b8a7e2e88348766febda51d0a93f9e9a89adb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:39 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d448160e9d5320d3b3e4dbb0b6b8a7e2e88348766febda51d0a93f9e9a89adb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:39 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d448160e9d5320d3b3e4dbb0b6b8a7e2e88348766febda51d0a93f9e9a89adb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:39 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d448160e9d5320d3b3e4dbb0b6b8a7e2e88348766febda51d0a93f9e9a89adb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:39 np0005544708 podman[77993]: 2025-12-03 21:08:39.471099396 +0000 UTC m=+0.195842149 container init 7d243ddd0e6c8418163555a177178b32dfa9e9d61d9a7d89bf78662d1adf4182 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_buck, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  3 16:08:39 np0005544708 podman[77993]: 2025-12-03 21:08:39.478411558 +0000 UTC m=+0.203154301 container start 7d243ddd0e6c8418163555a177178b32dfa9e9d61d9a7d89bf78662d1adf4182 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_buck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:39 np0005544708 podman[77993]: 2025-12-03 21:08:39.482322404 +0000 UTC m=+0.207065157 container attach 7d243ddd0e6c8418163555a177178b32dfa9e9d61d9a7d89bf78662d1adf4182 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 16:08:39 np0005544708 python3[78036]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:08:39 np0005544708 podman[78039]: 2025-12-03 21:08:39.673859146 +0000 UTC m=+0.041440377 container create c4cffc19d026111f1c7501b84e9375b4c8a590785a0f42ea04cbf035c05eca17 (image=quay.io/ceph/ceph:v20, name=agitated_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 16:08:39 np0005544708 systemd[1]: Started libpod-conmon-c4cffc19d026111f1c7501b84e9375b4c8a590785a0f42ea04cbf035c05eca17.scope.
Dec  3 16:08:39 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:39 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959160143b82d12ceb61371b8def0ebe580cce413adc81a9fb578e16b97dc8af/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:39 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959160143b82d12ceb61371b8def0ebe580cce413adc81a9fb578e16b97dc8af/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:39 np0005544708 podman[78039]: 2025-12-03 21:08:39.745684334 +0000 UTC m=+0.113265655 container init c4cffc19d026111f1c7501b84e9375b4c8a590785a0f42ea04cbf035c05eca17 (image=quay.io/ceph/ceph:v20, name=agitated_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec  3 16:08:39 np0005544708 podman[78039]: 2025-12-03 21:08:39.653308647 +0000 UTC m=+0.020889898 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:39 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:39 np0005544708 podman[78039]: 2025-12-03 21:08:39.757932828 +0000 UTC m=+0.125514089 container start c4cffc19d026111f1c7501b84e9375b4c8a590785a0f42ea04cbf035c05eca17 (image=quay.io/ceph/ceph:v20, name=agitated_galois, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Dec  3 16:08:39 np0005544708 podman[78039]: 2025-12-03 21:08:39.761980917 +0000 UTC m=+0.129562148 container attach c4cffc19d026111f1c7501b84e9375b4c8a590785a0f42ea04cbf035c05eca17 (image=quay.io/ceph/ceph:v20, name=agitated_galois, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:40 np0005544708 sad_buck[78008]: [
Dec  3 16:08:40 np0005544708 sad_buck[78008]:    {
Dec  3 16:08:40 np0005544708 sad_buck[78008]:        "available": false,
Dec  3 16:08:40 np0005544708 sad_buck[78008]:        "being_replaced": false,
Dec  3 16:08:40 np0005544708 sad_buck[78008]:        "ceph_device_lvm": false,
Dec  3 16:08:40 np0005544708 sad_buck[78008]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:        "lsm_data": {},
Dec  3 16:08:40 np0005544708 sad_buck[78008]:        "lvs": [],
Dec  3 16:08:40 np0005544708 sad_buck[78008]:        "path": "/dev/sr0",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:        "rejected_reasons": [
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "Insufficient space (<5GB)",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "Has a FileSystem"
Dec  3 16:08:40 np0005544708 sad_buck[78008]:        ],
Dec  3 16:08:40 np0005544708 sad_buck[78008]:        "sys_api": {
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "actuators": null,
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "device_nodes": [
Dec  3 16:08:40 np0005544708 sad_buck[78008]:                "sr0"
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            ],
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "devname": "sr0",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "human_readable_size": "482.00 KB",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "id_bus": "ata",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "model": "QEMU DVD-ROM",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "nr_requests": "2",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "parent": "/dev/sr0",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "partitions": {},
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "path": "/dev/sr0",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "removable": "1",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "rev": "2.5+",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "ro": "0",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "rotational": "1",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "sas_address": "",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "sas_device_handle": "",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "scheduler_mode": "mq-deadline",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "sectors": 0,
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "sectorsize": "2048",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "size": 493568.0,
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "support_discard": "2048",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "type": "disk",
Dec  3 16:08:40 np0005544708 sad_buck[78008]:            "vendor": "QEMU"
Dec  3 16:08:40 np0005544708 sad_buck[78008]:        }
Dec  3 16:08:40 np0005544708 sad_buck[78008]:    }
Dec  3 16:08:40 np0005544708 sad_buck[78008]: ]
Dec  3 16:08:40 np0005544708 systemd[1]: libpod-7d243ddd0e6c8418163555a177178b32dfa9e9d61d9a7d89bf78662d1adf4182.scope: Deactivated successfully.
Dec  3 16:08:40 np0005544708 conmon[78008]: conmon 7d243ddd0e6c84181635 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d243ddd0e6c8418163555a177178b32dfa9e9d61d9a7d89bf78662d1adf4182.scope/container/memory.events
Dec  3 16:08:40 np0005544708 podman[77993]: 2025-12-03 21:08:40.056128889 +0000 UTC m=+0.780871652 container died 7d243ddd0e6c8418163555a177178b32dfa9e9d61d9a7d89bf78662d1adf4182 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_buck, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  3 16:08:40 np0005544708 systemd[1]: var-lib-containers-storage-overlay-2d448160e9d5320d3b3e4dbb0b6b8a7e2e88348766febda51d0a93f9e9a89adb-merged.mount: Deactivated successfully.
Dec  3 16:08:40 np0005544708 podman[77993]: 2025-12-03 21:08:40.106730552 +0000 UTC m=+0.831473335 container remove 7d243ddd0e6c8418163555a177178b32dfa9e9d61d9a7d89bf78662d1adf4182 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_buck, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  3 16:08:40 np0005544708 systemd[1]: libpod-conmon-7d243ddd0e6c8418163555a177178b32dfa9e9d61d9a7d89bf78662d1adf4182.scope: Deactivated successfully.
Dec  3 16:08:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec  3 16:08:40 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2409020231' entity='client.admin' 
Dec  3 16:08:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:08:40 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:40 np0005544708 systemd[1]: libpod-c4cffc19d026111f1c7501b84e9375b4c8a590785a0f42ea04cbf035c05eca17.scope: Deactivated successfully.
Dec  3 16:08:40 np0005544708 podman[78039]: 2025-12-03 21:08:40.174728345 +0000 UTC m=+0.542309586 container died c4cffc19d026111f1c7501b84e9375b4c8a590785a0f42ea04cbf035c05eca17 (image=quay.io/ceph/ceph:v20, name=agitated_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:08:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:08:40 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:08:40 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:08:40 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  3 16:08:40 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec  3 16:08:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:08:40 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:08:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:08:40 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:08:40 np0005544708 systemd[1]: var-lib-containers-storage-overlay-959160143b82d12ceb61371b8def0ebe580cce413adc81a9fb578e16b97dc8af-merged.mount: Deactivated successfully.
Dec  3 16:08:40 np0005544708 ceph-mgr[75500]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  3 16:08:40 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  3 16:08:40 np0005544708 podman[78039]: 2025-12-03 21:08:40.219670747 +0000 UTC m=+0.587251988 container remove c4cffc19d026111f1c7501b84e9375b4c8a590785a0f42ea04cbf035c05eca17 (image=quay.io/ceph/ceph:v20, name=agitated_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  3 16:08:40 np0005544708 systemd[1]: libpod-conmon-c4cffc19d026111f1c7501b84e9375b4c8a590785a0f42ea04cbf035c05eca17.scope: Deactivated successfully.
Dec  3 16:08:40 np0005544708 ceph-mgr[75500]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/c21de27e-a7fd-594b-8324-0697ba9aab3a/config/ceph.conf
Dec  3 16:08:40 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/c21de27e-a7fd-594b-8324-0697ba9aab3a/config/ceph.conf
Dec  3 16:08:41 np0005544708 ceph-mgr[75500]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec  3 16:08:41 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 16:08:41 np0005544708 ceph-mon[75204]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  3 16:08:41 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/2409020231' entity='client.admin' 
Dec  3 16:08:41 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:41 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:41 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:41 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:41 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec  3 16:08:41 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:08:41 np0005544708 ceph-mon[75204]: Updating compute-0:/etc/ceph/ceph.conf
Dec  3 16:08:41 np0005544708 ceph-mon[75204]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  3 16:08:41 np0005544708 ansible-async_wrapper.py[79301]: Invoked with j236472321791 30 /home/zuul/.ansible/tmp/ansible-tmp-1764796120.577372-36618-79383468006511/AnsiballZ_command.py _
Dec  3 16:08:41 np0005544708 ansible-async_wrapper.py[79353]: Starting module and watcher
Dec  3 16:08:41 np0005544708 ansible-async_wrapper.py[79353]: Start watching 79354 (30)
Dec  3 16:08:41 np0005544708 ansible-async_wrapper.py[79354]: Start module (79354)
Dec  3 16:08:41 np0005544708 ansible-async_wrapper.py[79301]: Return async_wrapper task started.
Dec  3 16:08:41 np0005544708 python3[79355]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:08:41 np0005544708 podman[79406]: 2025-12-03 21:08:41.459285155 +0000 UTC m=+0.062867607 container create fccf07bccd2930f03f7d6bce7e1e1f526a0f7ac42fccee976b37a61114281376 (image=quay.io/ceph/ceph:v20, name=bold_shirley, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:08:41 np0005544708 systemd[1]: Started libpod-conmon-fccf07bccd2930f03f7d6bce7e1e1f526a0f7ac42fccee976b37a61114281376.scope.
Dec  3 16:08:41 np0005544708 podman[79406]: 2025-12-03 21:08:41.432979024 +0000 UTC m=+0.036561476 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:41 np0005544708 ceph-mgr[75500]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  3 16:08:41 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  3 16:08:41 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:41 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b87e353a529284a1e7e536b1ed1839b2c9c931d23cf5be0e34926e7bd9747ac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:41 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b87e353a529284a1e7e536b1ed1839b2c9c931d23cf5be0e34926e7bd9747ac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:41 np0005544708 podman[79406]: 2025-12-03 21:08:41.56612536 +0000 UTC m=+0.169707892 container init fccf07bccd2930f03f7d6bce7e1e1f526a0f7ac42fccee976b37a61114281376 (image=quay.io/ceph/ceph:v20, name=bold_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 16:08:41 np0005544708 podman[79406]: 2025-12-03 21:08:41.57463442 +0000 UTC m=+0.178216862 container start fccf07bccd2930f03f7d6bce7e1e1f526a0f7ac42fccee976b37a61114281376 (image=quay.io/ceph/ceph:v20, name=bold_shirley, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Dec  3 16:08:41 np0005544708 podman[79406]: 2025-12-03 21:08:41.578338252 +0000 UTC m=+0.181920684 container attach fccf07bccd2930f03f7d6bce7e1e1f526a0f7ac42fccee976b37a61114281376 (image=quay.io/ceph/ceph:v20, name=bold_shirley, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:08:41 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:42 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  3 16:08:42 np0005544708 bold_shirley[79470]: 
Dec  3 16:08:42 np0005544708 bold_shirley[79470]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  3 16:08:42 np0005544708 systemd[1]: libpod-fccf07bccd2930f03f7d6bce7e1e1f526a0f7ac42fccee976b37a61114281376.scope: Deactivated successfully.
Dec  3 16:08:42 np0005544708 podman[79406]: 2025-12-03 21:08:42.033948321 +0000 UTC m=+0.637530783 container died fccf07bccd2930f03f7d6bce7e1e1f526a0f7ac42fccee976b37a61114281376 (image=quay.io/ceph/ceph:v20, name=bold_shirley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:08:42 np0005544708 systemd[1]: var-lib-containers-storage-overlay-9b87e353a529284a1e7e536b1ed1839b2c9c931d23cf5be0e34926e7bd9747ac-merged.mount: Deactivated successfully.
Dec  3 16:08:42 np0005544708 podman[79406]: 2025-12-03 21:08:42.075749296 +0000 UTC m=+0.679331738 container remove fccf07bccd2930f03f7d6bce7e1e1f526a0f7ac42fccee976b37a61114281376 (image=quay.io/ceph/ceph:v20, name=bold_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  3 16:08:42 np0005544708 ansible-async_wrapper.py[79354]: Module complete (79354)
Dec  3 16:08:42 np0005544708 systemd[1]: libpod-conmon-fccf07bccd2930f03f7d6bce7e1e1f526a0f7ac42fccee976b37a61114281376.scope: Deactivated successfully.
Dec  3 16:08:42 np0005544708 ceph-mon[75204]: Updating compute-0:/var/lib/ceph/c21de27e-a7fd-594b-8324-0697ba9aab3a/config/ceph.conf
Dec  3 16:08:42 np0005544708 ceph-mgr[75500]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/c21de27e-a7fd-594b-8324-0697ba9aab3a/config/ceph.client.admin.keyring
Dec  3 16:08:42 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/c21de27e-a7fd-594b-8324-0697ba9aab3a/config/ceph.client.admin.keyring
Dec  3 16:08:42 np0005544708 python3[79884]: ansible-ansible.legacy.async_status Invoked with jid=j236472321791.79301 mode=status _async_dir=/root/.ansible_async
Dec  3 16:08:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:08:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:08:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:08:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:42 np0005544708 ceph-mgr[75500]: [progress INFO root] update: starting ev db5c9dd6-9943-4ed1-bec4-7b67ea0da67c (Updating crash deployment (+1 -> 1))
Dec  3 16:08:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  3 16:08:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec  3 16:08:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  3 16:08:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:08:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:08:42 np0005544708 ceph-mgr[75500]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec  3 16:08:42 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec  3 16:08:43 np0005544708 python3[80051]: ansible-ansible.legacy.async_status Invoked with jid=j236472321791.79301 mode=cleanup _async_dir=/root/.ansible_async
Dec  3 16:08:43 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 16:08:43 np0005544708 ceph-mon[75204]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  3 16:08:43 np0005544708 ceph-mon[75204]: Updating compute-0:/var/lib/ceph/c21de27e-a7fd-594b-8324-0697ba9aab3a/config/ceph.client.admin.keyring
Dec  3 16:08:43 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:43 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:43 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:43 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec  3 16:08:43 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  3 16:08:43 np0005544708 python3[80152]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  3 16:08:43 np0005544708 podman[80172]: 2025-12-03 21:08:43.578536598 +0000 UTC m=+0.068315072 container create 1f193c52cc44ac034d8c4f723c4d9bd32d9a3c2eb891f196c2a1263d55976d78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  3 16:08:43 np0005544708 systemd[1]: Started libpod-conmon-1f193c52cc44ac034d8c4f723c4d9bd32d9a3c2eb891f196c2a1263d55976d78.scope.
Dec  3 16:08:43 np0005544708 podman[80172]: 2025-12-03 21:08:43.54830932 +0000 UTC m=+0.038087864 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:08:43 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:43 np0005544708 podman[80172]: 2025-12-03 21:08:43.672261658 +0000 UTC m=+0.162040152 container init 1f193c52cc44ac034d8c4f723c4d9bd32d9a3c2eb891f196c2a1263d55976d78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  3 16:08:43 np0005544708 podman[80172]: 2025-12-03 21:08:43.678712558 +0000 UTC m=+0.168491002 container start 1f193c52cc44ac034d8c4f723c4d9bd32d9a3c2eb891f196c2a1263d55976d78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_heisenberg, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:43 np0005544708 podman[80172]: 2025-12-03 21:08:43.682595745 +0000 UTC m=+0.172374219 container attach 1f193c52cc44ac034d8c4f723c4d9bd32d9a3c2eb891f196c2a1263d55976d78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_heisenberg, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:43 np0005544708 flamboyant_heisenberg[80191]: 167 167
Dec  3 16:08:43 np0005544708 systemd[1]: libpod-1f193c52cc44ac034d8c4f723c4d9bd32d9a3c2eb891f196c2a1263d55976d78.scope: Deactivated successfully.
Dec  3 16:08:43 np0005544708 podman[80172]: 2025-12-03 21:08:43.685275231 +0000 UTC m=+0.175053715 container died 1f193c52cc44ac034d8c4f723c4d9bd32d9a3c2eb891f196c2a1263d55976d78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_heisenberg, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:08:43 np0005544708 systemd[1]: var-lib-containers-storage-overlay-14e6289cf6278c15034ca5b774064a47f3a7949cfd34367cb71476760f5ac2a5-merged.mount: Deactivated successfully.
Dec  3 16:08:43 np0005544708 podman[80172]: 2025-12-03 21:08:43.725446895 +0000 UTC m=+0.215225339 container remove 1f193c52cc44ac034d8c4f723c4d9bd32d9a3c2eb891f196c2a1263d55976d78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec  3 16:08:43 np0005544708 systemd[1]: libpod-conmon-1f193c52cc44ac034d8c4f723c4d9bd32d9a3c2eb891f196c2a1263d55976d78.scope: Deactivated successfully.
Dec  3 16:08:43 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:43 np0005544708 systemd[1]: Reloading.
Dec  3 16:08:43 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:08:43 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:08:44 np0005544708 systemd[1]: Reloading.
Dec  3 16:08:44 np0005544708 ceph-mon[75204]: Deploying daemon crash.compute-0 on compute-0
Dec  3 16:08:44 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:08:44 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:08:44 np0005544708 python3[80271]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:08:44 np0005544708 podman[80310]: 2025-12-03 21:08:44.298990654 +0000 UTC m=+0.068554539 container create 7f2de2732dadf5048c84c2ee2e47df3e9d7e817fc291d5a3262a708190a87894 (image=quay.io/ceph/ceph:v20, name=eager_khorana, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:08:44 np0005544708 podman[80310]: 2025-12-03 21:08:44.270845277 +0000 UTC m=+0.040409172 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:44 np0005544708 systemd[1]: Started libpod-conmon-7f2de2732dadf5048c84c2ee2e47df3e9d7e817fc291d5a3262a708190a87894.scope.
Dec  3 16:08:44 np0005544708 systemd[1]: Starting Ceph crash.compute-0 for c21de27e-a7fd-594b-8324-0697ba9aab3a...
Dec  3 16:08:44 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:44 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64babdf42d8e77669e50cba940473111ffb176c1aac908ac788366e9e917b8d0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:44 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64babdf42d8e77669e50cba940473111ffb176c1aac908ac788366e9e917b8d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:44 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64babdf42d8e77669e50cba940473111ffb176c1aac908ac788366e9e917b8d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:44 np0005544708 podman[80310]: 2025-12-03 21:08:44.432781455 +0000 UTC m=+0.202345350 container init 7f2de2732dadf5048c84c2ee2e47df3e9d7e817fc291d5a3262a708190a87894 (image=quay.io/ceph/ceph:v20, name=eager_khorana, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:08:44 np0005544708 podman[80310]: 2025-12-03 21:08:44.440548757 +0000 UTC m=+0.210112612 container start 7f2de2732dadf5048c84c2ee2e47df3e9d7e817fc291d5a3262a708190a87894 (image=quay.io/ceph/ceph:v20, name=eager_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 16:08:44 np0005544708 podman[80310]: 2025-12-03 21:08:44.444037894 +0000 UTC m=+0.213601759 container attach 7f2de2732dadf5048c84c2ee2e47df3e9d7e817fc291d5a3262a708190a87894 (image=quay.io/ceph/ceph:v20, name=eager_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:44 np0005544708 podman[80399]: 2025-12-03 21:08:44.636291673 +0000 UTC m=+0.044226076 container create 4b1e1515111c30158193d0a92c01f0c36debcabdefc45ebea0a47572a2dd93c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-crash-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 16:08:44 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b68ea16925b1ec6a0640fec20d7b7c2947c5283d548e299f57f55ceaf0ef380/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:44 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b68ea16925b1ec6a0640fec20d7b7c2947c5283d548e299f57f55ceaf0ef380/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:44 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b68ea16925b1ec6a0640fec20d7b7c2947c5283d548e299f57f55ceaf0ef380/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:44 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b68ea16925b1ec6a0640fec20d7b7c2947c5283d548e299f57f55ceaf0ef380/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:44 np0005544708 podman[80399]: 2025-12-03 21:08:44.708070501 +0000 UTC m=+0.116004954 container init 4b1e1515111c30158193d0a92c01f0c36debcabdefc45ebea0a47572a2dd93c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Dec  3 16:08:44 np0005544708 podman[80399]: 2025-12-03 21:08:44.615627632 +0000 UTC m=+0.023562055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:08:44 np0005544708 podman[80399]: 2025-12-03 21:08:44.718627391 +0000 UTC m=+0.126561814 container start 4b1e1515111c30158193d0a92c01f0c36debcabdefc45ebea0a47572a2dd93c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-crash-compute-0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 16:08:44 np0005544708 bash[80399]: 4b1e1515111c30158193d0a92c01f0c36debcabdefc45ebea0a47572a2dd93c1
Dec  3 16:08:44 np0005544708 systemd[1]: Started Ceph crash.compute-0 for c21de27e-a7fd-594b-8324-0697ba9aab3a.
Dec  3 16:08:44 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-crash-compute-0[80413]: INFO:ceph-crash:pinging cluster to exercise our key
Dec  3 16:08:44 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:08:44 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:44 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:08:44 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:44 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  3 16:08:44 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:44 np0005544708 ceph-mgr[75500]: [progress INFO root] complete: finished ev db5c9dd6-9943-4ed1-bec4-7b67ea0da67c (Updating crash deployment (+1 -> 1))
Dec  3 16:08:44 np0005544708 ceph-mgr[75500]: [progress INFO root] Completed event db5c9dd6-9943-4ed1-bec4-7b67ea0da67c (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec  3 16:08:44 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  3 16:08:44 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:44 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  3 16:08:44 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:44 np0005544708 ceph-mgr[75500]: [progress INFO root] update: starting ev 1b645393-3007-4304-b52b-a35e10c6aa55 (Updating mgr deployment (+1 -> 2))
Dec  3 16:08:44 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.jdapcy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  3 16:08:44 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.jdapcy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec  3 16:08:44 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jdapcy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  3 16:08:44 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  3 16:08:44 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "mgr services"} : dispatch
Dec  3 16:08:44 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:08:44 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:08:44 np0005544708 ceph-mgr[75500]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.jdapcy on compute-0
Dec  3 16:08:44 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.jdapcy on compute-0
Dec  3 16:08:44 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  3 16:08:44 np0005544708 eager_khorana[80328]: 
Dec  3 16:08:44 np0005544708 eager_khorana[80328]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  3 16:08:44 np0005544708 systemd[1]: libpod-7f2de2732dadf5048c84c2ee2e47df3e9d7e817fc291d5a3262a708190a87894.scope: Deactivated successfully.
Dec  3 16:08:44 np0005544708 podman[80310]: 2025-12-03 21:08:44.895859109 +0000 UTC m=+0.665423034 container died 7f2de2732dadf5048c84c2ee2e47df3e9d7e817fc291d5a3262a708190a87894 (image=quay.io/ceph/ceph:v20, name=eager_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:08:44 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-crash-compute-0[80413]: 2025-12-03T21:08:44.912+0000 7f35db661640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  3 16:08:44 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-crash-compute-0[80413]: 2025-12-03T21:08:44.912+0000 7f35db661640 -1 AuthRegistry(0x7f35d4052930) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  3 16:08:44 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-crash-compute-0[80413]: 2025-12-03T21:08:44.913+0000 7f35db661640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  3 16:08:44 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-crash-compute-0[80413]: 2025-12-03T21:08:44.913+0000 7f35db661640 -1 AuthRegistry(0x7f35db65ffe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  3 16:08:44 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-crash-compute-0[80413]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec  3 16:08:44 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-crash-compute-0[80413]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec  3 16:08:44 np0005544708 systemd[1]: var-lib-containers-storage-overlay-64babdf42d8e77669e50cba940473111ffb176c1aac908ac788366e9e917b8d0-merged.mount: Deactivated successfully.
Dec  3 16:08:44 np0005544708 podman[80310]: 2025-12-03 21:08:44.946476492 +0000 UTC m=+0.716040337 container remove 7f2de2732dadf5048c84c2ee2e47df3e9d7e817fc291d5a3262a708190a87894 (image=quay.io/ceph/ceph:v20, name=eager_khorana, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:08:44 np0005544708 systemd[1]: libpod-conmon-7f2de2732dadf5048c84c2ee2e47df3e9d7e817fc291d5a3262a708190a87894.scope: Deactivated successfully.
Dec  3 16:08:45 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 16:08:45 np0005544708 podman[80559]: 2025-12-03 21:08:45.426123946 +0000 UTC m=+0.055449184 container create 8db86bd0acba44697c5c23d720b637d5787aed5f697ffd9bef64abe88e696011 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_kare, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:45 np0005544708 python3[80542]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:08:45 np0005544708 systemd[1]: Started libpod-conmon-8db86bd0acba44697c5c23d720b637d5787aed5f697ffd9bef64abe88e696011.scope.
Dec  3 16:08:45 np0005544708 podman[80559]: 2025-12-03 21:08:45.395662972 +0000 UTC m=+0.024988250 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:08:45 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:45 np0005544708 podman[80576]: 2025-12-03 21:08:45.528441929 +0000 UTC m=+0.057916125 container create 7dd0eaaab2485872230f5e3286218a71930163741055fd0f0f28749a5a6b8dee (image=quay.io/ceph/ceph:v20, name=pedantic_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:45 np0005544708 podman[80559]: 2025-12-03 21:08:45.5353329 +0000 UTC m=+0.164658218 container init 8db86bd0acba44697c5c23d720b637d5787aed5f697ffd9bef64abe88e696011 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:08:45 np0005544708 podman[80559]: 2025-12-03 21:08:45.546737262 +0000 UTC m=+0.176062490 container start 8db86bd0acba44697c5c23d720b637d5787aed5f697ffd9bef64abe88e696011 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_kare, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:08:45 np0005544708 podman[80559]: 2025-12-03 21:08:45.549645464 +0000 UTC m=+0.178970732 container attach 8db86bd0acba44697c5c23d720b637d5787aed5f697ffd9bef64abe88e696011 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_kare, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec  3 16:08:45 np0005544708 interesting_kare[80582]: 167 167
Dec  3 16:08:45 np0005544708 systemd[1]: libpod-8db86bd0acba44697c5c23d720b637d5787aed5f697ffd9bef64abe88e696011.scope: Deactivated successfully.
Dec  3 16:08:45 np0005544708 podman[80559]: 2025-12-03 21:08:45.551341756 +0000 UTC m=+0.180666994 container died 8db86bd0acba44697c5c23d720b637d5787aed5f697ffd9bef64abe88e696011 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 16:08:45 np0005544708 systemd[1]: Started libpod-conmon-7dd0eaaab2485872230f5e3286218a71930163741055fd0f0f28749a5a6b8dee.scope.
Dec  3 16:08:45 np0005544708 systemd[1]: var-lib-containers-storage-overlay-1d3c4c9e4efce36bff79b2afb30299d002ba6cbb02340ba4ad93e45adc1c78eb-merged.mount: Deactivated successfully.
Dec  3 16:08:45 np0005544708 podman[80559]: 2025-12-03 21:08:45.588425064 +0000 UTC m=+0.217750292 container remove 8db86bd0acba44697c5c23d720b637d5787aed5f697ffd9bef64abe88e696011 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_kare, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:08:45 np0005544708 podman[80576]: 2025-12-03 21:08:45.501057211 +0000 UTC m=+0.030531497 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:45 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:45 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9531cb668dcac972a4e526cdde90de7ab00350f65339d88f5e1b979e8a6ab254/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:45 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9531cb668dcac972a4e526cdde90de7ab00350f65339d88f5e1b979e8a6ab254/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:45 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9531cb668dcac972a4e526cdde90de7ab00350f65339d88f5e1b979e8a6ab254/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:45 np0005544708 systemd[1]: libpod-conmon-8db86bd0acba44697c5c23d720b637d5787aed5f697ffd9bef64abe88e696011.scope: Deactivated successfully.
Dec  3 16:08:45 np0005544708 podman[80576]: 2025-12-03 21:08:45.613535856 +0000 UTC m=+0.143010092 container init 7dd0eaaab2485872230f5e3286218a71930163741055fd0f0f28749a5a6b8dee (image=quay.io/ceph/ceph:v20, name=pedantic_stonebraker, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 16:08:45 np0005544708 podman[80576]: 2025-12-03 21:08:45.624172999 +0000 UTC m=+0.153647185 container start 7dd0eaaab2485872230f5e3286218a71930163741055fd0f0f28749a5a6b8dee (image=quay.io/ceph/ceph:v20, name=pedantic_stonebraker, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:08:45 np0005544708 podman[80576]: 2025-12-03 21:08:45.627987623 +0000 UTC m=+0.157461879 container attach 7dd0eaaab2485872230f5e3286218a71930163741055fd0f0f28749a5a6b8dee (image=quay.io/ceph/ceph:v20, name=pedantic_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:08:45 np0005544708 systemd[1]: Reloading.
Dec  3 16:08:45 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:08:45 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:08:45 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:45 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:45 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:45 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:45 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:45 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:45 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.jdapcy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec  3 16:08:45 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jdapcy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  3 16:08:45 np0005544708 ceph-mon[75204]: Deploying daemon mgr.compute-0.jdapcy on compute-0
Dec  3 16:08:45 np0005544708 systemd[1]: Reloading.
Dec  3 16:08:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec  3 16:08:46 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/757938590' entity='client.admin' 
Dec  3 16:08:46 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:08:46 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:08:46 np0005544708 podman[80576]: 2025-12-03 21:08:46.105797732 +0000 UTC m=+0.635271928 container died 7dd0eaaab2485872230f5e3286218a71930163741055fd0f0f28749a5a6b8dee (image=quay.io/ceph/ceph:v20, name=pedantic_stonebraker, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 16:08:46 np0005544708 ansible-async_wrapper.py[79353]: Done in kid B.
Dec  3 16:08:46 np0005544708 systemd[1]: libpod-7dd0eaaab2485872230f5e3286218a71930163741055fd0f0f28749a5a6b8dee.scope: Deactivated successfully.
Dec  3 16:08:46 np0005544708 systemd[1]: var-lib-containers-storage-overlay-9531cb668dcac972a4e526cdde90de7ab00350f65339d88f5e1b979e8a6ab254-merged.mount: Deactivated successfully.
Dec  3 16:08:46 np0005544708 podman[80576]: 2025-12-03 21:08:46.277547614 +0000 UTC m=+0.807021830 container remove 7dd0eaaab2485872230f5e3286218a71930163741055fd0f0f28749a5a6b8dee (image=quay.io/ceph/ceph:v20, name=pedantic_stonebraker, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Dec  3 16:08:46 np0005544708 systemd[1]: Starting Ceph mgr.compute-0.jdapcy for c21de27e-a7fd-594b-8324-0697ba9aab3a...
Dec  3 16:08:46 np0005544708 systemd[1]: libpod-conmon-7dd0eaaab2485872230f5e3286218a71930163741055fd0f0f28749a5a6b8dee.scope: Deactivated successfully.
Dec  3 16:08:46 np0005544708 python3[80783]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:08:46 np0005544708 podman[80796]: 2025-12-03 21:08:46.630298436 +0000 UTC m=+0.062422386 container create 70b4c07d62442eeb897b819626352cfaa1be138f8695a20ab1ca8ed143214483 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jdapcy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  3 16:08:46 np0005544708 podman[80796]: 2025-12-03 21:08:46.598773486 +0000 UTC m=+0.030897526 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:08:46 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fed3044219a64be64de93285b6cd8db400e3b0cdcae1095ce876c7b38efed9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:46 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fed3044219a64be64de93285b6cd8db400e3b0cdcae1095ce876c7b38efed9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:46 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fed3044219a64be64de93285b6cd8db400e3b0cdcae1095ce876c7b38efed9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:46 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fed3044219a64be64de93285b6cd8db400e3b0cdcae1095ce876c7b38efed9d/merged/var/lib/ceph/mgr/ceph-compute-0.jdapcy supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:08:46 np0005544708 podman[80796]: 2025-12-03 21:08:46.72861032 +0000 UTC m=+0.160734300 container init 70b4c07d62442eeb897b819626352cfaa1be138f8695a20ab1ca8ed143214483 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jdapcy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:46 np0005544708 podman[80796]: 2025-12-03 21:08:46.740880573 +0000 UTC m=+0.173004523 container start 70b4c07d62442eeb897b819626352cfaa1be138f8695a20ab1ca8ed143214483 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jdapcy, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:46 np0005544708 bash[80796]: 70b4c07d62442eeb897b819626352cfaa1be138f8695a20ab1ca8ed143214483
Dec  3 16:08:46 np0005544708 podman[80809]: 2025-12-03 21:08:46.751113877 +0000 UTC m=+0.084502523 container create 64bdbc3e3538ca2f88b2304144f7243e40aa769b329fc8b0c076f1a1af6352e1 (image=quay.io/ceph/ceph:v20, name=laughing_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:46 np0005544708 ceph-mgr[75500]: [progress INFO root] Writing back 1 completed events
Dec  3 16:08:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  3 16:08:46 np0005544708 systemd[1]: Started Ceph mgr.compute-0.jdapcy for c21de27e-a7fd-594b-8324-0697ba9aab3a.
Dec  3 16:08:46 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:46 np0005544708 systemd[1]: Started libpod-conmon-64bdbc3e3538ca2f88b2304144f7243e40aa769b329fc8b0c076f1a1af6352e1.scope.
Dec  3 16:08:46 np0005544708 podman[80809]: 2025-12-03 21:08:46.715402773 +0000 UTC m=+0.048812579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:46 np0005544708 ceph-mgr[80827]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 16:08:46 np0005544708 ceph-mgr[80827]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec  3 16:08:46 np0005544708 ceph-mgr[80827]: pidfile_write: ignore empty --pid-file
Dec  3 16:08:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:08:46 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:08:46 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  3 16:08:46 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:46 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:46 np0005544708 ceph-mgr[75500]: [progress INFO root] complete: finished ev 1b645393-3007-4304-b52b-a35e10c6aa55 (Updating mgr deployment (+1 -> 2))
Dec  3 16:08:46 np0005544708 ceph-mgr[75500]: [progress INFO root] Completed event 1b645393-3007-4304-b52b-a35e10c6aa55 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Dec  3 16:08:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  3 16:08:46 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf98536e654af307a99fc0de9bbeb9bffc9d01cfc5c508f82a03c1f3ba7168b8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:46 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf98536e654af307a99fc0de9bbeb9bffc9d01cfc5c508f82a03c1f3ba7168b8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:46 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf98536e654af307a99fc0de9bbeb9bffc9d01cfc5c508f82a03c1f3ba7168b8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:46 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:46 np0005544708 podman[80809]: 2025-12-03 21:08:46.859630184 +0000 UTC m=+0.193018840 container init 64bdbc3e3538ca2f88b2304144f7243e40aa769b329fc8b0c076f1a1af6352e1 (image=quay.io/ceph/ceph:v20, name=laughing_jones, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:46 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'alerts'
Dec  3 16:08:46 np0005544708 podman[80809]: 2025-12-03 21:08:46.868496633 +0000 UTC m=+0.201885289 container start 64bdbc3e3538ca2f88b2304144f7243e40aa769b329fc8b0c076f1a1af6352e1 (image=quay.io/ceph/ceph:v20, name=laughing_jones, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 16:08:46 np0005544708 podman[80809]: 2025-12-03 21:08:46.871805975 +0000 UTC m=+0.205194611 container attach 64bdbc3e3538ca2f88b2304144f7243e40aa769b329fc8b0c076f1a1af6352e1 (image=quay.io/ceph/ceph:v20, name=laughing_jones, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:46 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'balancer'
Dec  3 16:08:47 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'cephadm'
Dec  3 16:08:47 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/757938590' entity='client.admin' 
Dec  3 16:08:47 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:47 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:47 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:47 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:47 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:47 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 16:08:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec  3 16:08:47 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1276199958' entity='client.admin' 
Dec  3 16:08:47 np0005544708 systemd[1]: libpod-64bdbc3e3538ca2f88b2304144f7243e40aa769b329fc8b0c076f1a1af6352e1.scope: Deactivated successfully.
Dec  3 16:08:47 np0005544708 podman[80809]: 2025-12-03 21:08:47.32120397 +0000 UTC m=+0.654592606 container died 64bdbc3e3538ca2f88b2304144f7243e40aa769b329fc8b0c076f1a1af6352e1 (image=quay.io/ceph/ceph:v20, name=laughing_jones, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  3 16:08:47 np0005544708 systemd[1]: var-lib-containers-storage-overlay-cf98536e654af307a99fc0de9bbeb9bffc9d01cfc5c508f82a03c1f3ba7168b8-merged.mount: Deactivated successfully.
Dec  3 16:08:47 np0005544708 podman[80809]: 2025-12-03 21:08:47.359803455 +0000 UTC m=+0.693192091 container remove 64bdbc3e3538ca2f88b2304144f7243e40aa769b329fc8b0c076f1a1af6352e1 (image=quay.io/ceph/ceph:v20, name=laughing_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  3 16:08:47 np0005544708 systemd[1]: libpod-conmon-64bdbc3e3538ca2f88b2304144f7243e40aa769b329fc8b0c076f1a1af6352e1.scope: Deactivated successfully.
Dec  3 16:08:47 np0005544708 podman[81015]: 2025-12-03 21:08:47.517126249 +0000 UTC m=+0.048255355 container exec 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 16:08:47 np0005544708 podman[81015]: 2025-12-03 21:08:47.619071034 +0000 UTC m=+0.150200140 container exec_died 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:47 np0005544708 python3[81062]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:08:47 np0005544708 podman[81087]: 2025-12-03 21:08:47.75059371 +0000 UTC m=+0.037366317 container create c6105edeb663c8ea74899b28822ad662efc9d2b60daa7b55178f19d8f0dbe980 (image=quay.io/ceph/ceph:v20, name=frosty_jennings, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:47 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:47 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'crash'
Dec  3 16:08:47 np0005544708 systemd[1]: Started libpod-conmon-c6105edeb663c8ea74899b28822ad662efc9d2b60daa7b55178f19d8f0dbe980.scope.
Dec  3 16:08:47 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:47 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a926c7b4f2980dfffa2f0c19cf853686255e0a298843ed380ac5796969de547a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:47 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a926c7b4f2980dfffa2f0c19cf853686255e0a298843ed380ac5796969de547a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:47 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a926c7b4f2980dfffa2f0c19cf853686255e0a298843ed380ac5796969de547a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:47 np0005544708 podman[81087]: 2025-12-03 21:08:47.824989521 +0000 UTC m=+0.111762148 container init c6105edeb663c8ea74899b28822ad662efc9d2b60daa7b55178f19d8f0dbe980 (image=quay.io/ceph/ceph:v20, name=frosty_jennings, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 16:08:47 np0005544708 podman[81087]: 2025-12-03 21:08:47.831662497 +0000 UTC m=+0.118435094 container start c6105edeb663c8ea74899b28822ad662efc9d2b60daa7b55178f19d8f0dbe980 (image=quay.io/ceph/ceph:v20, name=frosty_jennings, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:47 np0005544708 podman[81087]: 2025-12-03 21:08:47.834241971 +0000 UTC m=+0.121014568 container attach c6105edeb663c8ea74899b28822ad662efc9d2b60daa7b55178f19d8f0dbe980 (image=quay.io/ceph/ceph:v20, name=frosty_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 16:08:47 np0005544708 podman[81087]: 2025-12-03 21:08:47.736599543 +0000 UTC m=+0.023372150 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:47 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'dashboard'
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/88973863' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Dec  3 16:08:48 np0005544708 ceph-mgr[75500]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec  3 16:08:48 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:08:48 np0005544708 ceph-mgr[75500]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec  3 16:08:48 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/1276199958' entity='client.admin' 
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/88973863' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec  3 16:08:48 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'devicehealth'
Dec  3 16:08:48 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'diskprediction_local'
Dec  3 16:08:48 np0005544708 podman[81283]: 2025-12-03 21:08:48.708388531 +0000 UTC m=+0.041294974 container create ce76797fc46dcb62480e2b286aefba767f975233028638155ef3b6e9a967912d (image=quay.io/ceph/ceph:v20, name=silly_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:48 np0005544708 systemd[1]: Started libpod-conmon-ce76797fc46dcb62480e2b286aefba767f975233028638155ef3b6e9a967912d.scope.
Dec  3 16:08:48 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:48 np0005544708 podman[81283]: 2025-12-03 21:08:48.692847495 +0000 UTC m=+0.025753968 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:48 np0005544708 podman[81283]: 2025-12-03 21:08:48.834372299 +0000 UTC m=+0.167278752 container init ce76797fc46dcb62480e2b286aefba767f975233028638155ef3b6e9a967912d (image=quay.io/ceph/ceph:v20, name=silly_yonath, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:48 np0005544708 podman[81283]: 2025-12-03 21:08:48.840954602 +0000 UTC m=+0.173861045 container start ce76797fc46dcb62480e2b286aefba767f975233028638155ef3b6e9a967912d (image=quay.io/ceph/ceph:v20, name=silly_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  3 16:08:48 np0005544708 podman[81283]: 2025-12-03 21:08:48.844997822 +0000 UTC m=+0.177904285 container attach ce76797fc46dcb62480e2b286aefba767f975233028638155ef3b6e9a967912d (image=quay.io/ceph/ceph:v20, name=silly_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 16:08:48 np0005544708 silly_yonath[81300]: 167 167
Dec  3 16:08:48 np0005544708 systemd[1]: libpod-ce76797fc46dcb62480e2b286aefba767f975233028638155ef3b6e9a967912d.scope: Deactivated successfully.
Dec  3 16:08:48 np0005544708 podman[81283]: 2025-12-03 21:08:48.846627722 +0000 UTC m=+0.179534165 container died ce76797fc46dcb62480e2b286aefba767f975233028638155ef3b6e9a967912d (image=quay.io/ceph/ceph:v20, name=silly_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 16:08:48 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jdapcy[80817]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  3 16:08:48 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jdapcy[80817]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  3 16:08:48 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jdapcy[80817]:  from numpy import show_config as show_numpy_config
Dec  3 16:08:48 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'influx'
Dec  3 16:08:48 np0005544708 systemd[1]: var-lib-containers-storage-overlay-83e7aee2e28b5b5ec462036bf85d3ec63a0ad824fe680b12dd8dbfbd2baf9d2c-merged.mount: Deactivated successfully.
Dec  3 16:08:48 np0005544708 podman[81283]: 2025-12-03 21:08:48.900601518 +0000 UTC m=+0.233507961 container remove ce76797fc46dcb62480e2b286aefba767f975233028638155ef3b6e9a967912d (image=quay.io/ceph/ceph:v20, name=silly_yonath, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:08:48 np0005544708 systemd[1]: libpod-conmon-ce76797fc46dcb62480e2b286aefba767f975233028638155ef3b6e9a967912d.scope: Deactivated successfully.
Dec  3 16:08:48 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'insights'
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:48 np0005544708 ceph-mgr[75500]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.jxauqt (unknown last config time)...
Dec  3 16:08:48 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.jxauqt (unknown last config time)...
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.jxauqt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.jxauqt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "mgr services"} : dispatch
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:08:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:08:48 np0005544708 ceph-mgr[75500]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.jxauqt on compute-0
Dec  3 16:08:48 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.jxauqt on compute-0
Dec  3 16:08:49 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'iostat'
Dec  3 16:08:49 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'k8sevents'
Dec  3 16:08:49 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 16:08:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec  3 16:08:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 16:08:49 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/88973863' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  3 16:08:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec  3 16:08:49 np0005544708 frosty_jennings[81116]: set require_min_compat_client to mimic
Dec  3 16:08:49 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec  3 16:08:49 np0005544708 systemd[1]: libpod-c6105edeb663c8ea74899b28822ad662efc9d2b60daa7b55178f19d8f0dbe980.scope: Deactivated successfully.
Dec  3 16:08:49 np0005544708 podman[81087]: 2025-12-03 21:08:49.249726011 +0000 UTC m=+1.536498598 container died c6105edeb663c8ea74899b28822ad662efc9d2b60daa7b55178f19d8f0dbe980 (image=quay.io/ceph/ceph:v20, name=frosty_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:49 np0005544708 systemd[1]: var-lib-containers-storage-overlay-a926c7b4f2980dfffa2f0c19cf853686255e0a298843ed380ac5796969de547a-merged.mount: Deactivated successfully.
Dec  3 16:08:49 np0005544708 podman[81087]: 2025-12-03 21:08:49.287936168 +0000 UTC m=+1.574708755 container remove c6105edeb663c8ea74899b28822ad662efc9d2b60daa7b55178f19d8f0dbe980 (image=quay.io/ceph/ceph:v20, name=frosty_jennings, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec  3 16:08:49 np0005544708 systemd[1]: libpod-conmon-c6105edeb663c8ea74899b28822ad662efc9d2b60daa7b55178f19d8f0dbe980.scope: Deactivated successfully.
Dec  3 16:08:49 np0005544708 ceph-mon[75204]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec  3 16:08:49 np0005544708 ceph-mon[75204]: Reconfiguring daemon mon.compute-0 on compute-0
Dec  3 16:08:49 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:49 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:49 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.jxauqt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec  3 16:08:49 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/88973863' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  3 16:08:49 np0005544708 podman[81395]: 2025-12-03 21:08:49.382685192 +0000 UTC m=+0.040483572 container create b7f1ba6f374c2a01d5c1e125f274f6bb21a90173f4b4fc362acb68f1cc3f4b14 (image=quay.io/ceph/ceph:v20, name=laughing_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:49 np0005544708 systemd[1]: Started libpod-conmon-b7f1ba6f374c2a01d5c1e125f274f6bb21a90173f4b4fc362acb68f1cc3f4b14.scope.
Dec  3 16:08:49 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:49 np0005544708 podman[81395]: 2025-12-03 21:08:49.361899628 +0000 UTC m=+0.019698018 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:49 np0005544708 podman[81395]: 2025-12-03 21:08:49.46536512 +0000 UTC m=+0.123163530 container init b7f1ba6f374c2a01d5c1e125f274f6bb21a90173f4b4fc362acb68f1cc3f4b14 (image=quay.io/ceph/ceph:v20, name=laughing_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec  3 16:08:49 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'localpool'
Dec  3 16:08:49 np0005544708 podman[81395]: 2025-12-03 21:08:49.476034274 +0000 UTC m=+0.133832654 container start b7f1ba6f374c2a01d5c1e125f274f6bb21a90173f4b4fc362acb68f1cc3f4b14 (image=quay.io/ceph/ceph:v20, name=laughing_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:49 np0005544708 podman[81395]: 2025-12-03 21:08:49.479641743 +0000 UTC m=+0.137440133 container attach b7f1ba6f374c2a01d5c1e125f274f6bb21a90173f4b4fc362acb68f1cc3f4b14 (image=quay.io/ceph/ceph:v20, name=laughing_meninsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:08:49 np0005544708 laughing_meninsky[81411]: 167 167
Dec  3 16:08:49 np0005544708 systemd[1]: libpod-b7f1ba6f374c2a01d5c1e125f274f6bb21a90173f4b4fc362acb68f1cc3f4b14.scope: Deactivated successfully.
Dec  3 16:08:49 np0005544708 podman[81395]: 2025-12-03 21:08:49.482396051 +0000 UTC m=+0.140194451 container died b7f1ba6f374c2a01d5c1e125f274f6bb21a90173f4b4fc362acb68f1cc3f4b14 (image=quay.io/ceph/ceph:v20, name=laughing_meninsky, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:49 np0005544708 systemd[1]: var-lib-containers-storage-overlay-dcac2bb3d766ff155d011d0cee8b53a6be44d2f63d67d9c0b584d9d68efe6872-merged.mount: Deactivated successfully.
Dec  3 16:08:49 np0005544708 podman[81395]: 2025-12-03 21:08:49.521247003 +0000 UTC m=+0.179045383 container remove b7f1ba6f374c2a01d5c1e125f274f6bb21a90173f4b4fc362acb68f1cc3f4b14 (image=quay.io/ceph/ceph:v20, name=laughing_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:08:49 np0005544708 systemd[1]: libpod-conmon-b7f1ba6f374c2a01d5c1e125f274f6bb21a90173f4b4fc362acb68f1cc3f4b14.scope: Deactivated successfully.
Dec  3 16:08:49 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'mds_autoscaler'
Dec  3 16:08:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:08:49 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:08:49 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:49 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:49 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'mirroring'
Dec  3 16:08:49 np0005544708 python3[81502]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:08:49 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'nfs'
Dec  3 16:08:50 np0005544708 podman[81505]: 2025-12-03 21:08:49.953333189 +0000 UTC m=+0.044439221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:50 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'orchestrator'
Dec  3 16:08:50 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'osd_perf_query'
Dec  3 16:08:50 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'osd_support'
Dec  3 16:08:50 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'pg_autoscaler'
Dec  3 16:08:50 np0005544708 podman[81505]: 2025-12-03 21:08:50.647207467 +0000 UTC m=+0.738313439 container create 85e2bc82a5f18c21d7834cbd6f0c0d42e81b0fb249a6ba1fd384fe452906c87f (image=quay.io/ceph/ceph:v20, name=elated_moore, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:08:50 np0005544708 ceph-mon[75204]: Reconfiguring mgr.compute-0.jxauqt (unknown last config time)...
Dec  3 16:08:50 np0005544708 ceph-mon[75204]: Reconfiguring daemon mgr.compute-0.jxauqt on compute-0
Dec  3 16:08:50 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:50 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:50 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'progress'
Dec  3 16:08:50 np0005544708 systemd[1]: Started libpod-conmon-85e2bc82a5f18c21d7834cbd6f0c0d42e81b0fb249a6ba1fd384fe452906c87f.scope.
Dec  3 16:08:50 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:50 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a707decd1ffea673dc464bd073abf3aa4105b05d4ccb4c86b0a15eb5cd26607/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:50 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a707decd1ffea673dc464bd073abf3aa4105b05d4ccb4c86b0a15eb5cd26607/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:50 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a707decd1ffea673dc464bd073abf3aa4105b05d4ccb4c86b0a15eb5cd26607/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:50 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'prometheus'
Dec  3 16:08:50 np0005544708 podman[81505]: 2025-12-03 21:08:50.743026289 +0000 UTC m=+0.834132251 container init 85e2bc82a5f18c21d7834cbd6f0c0d42e81b0fb249a6ba1fd384fe452906c87f (image=quay.io/ceph/ceph:v20, name=elated_moore, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:50 np0005544708 podman[81505]: 2025-12-03 21:08:50.754450351 +0000 UTC m=+0.845556323 container start 85e2bc82a5f18c21d7834cbd6f0c0d42e81b0fb249a6ba1fd384fe452906c87f (image=quay.io/ceph/ceph:v20, name=elated_moore, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 16:08:50 np0005544708 podman[81505]: 2025-12-03 21:08:50.758588484 +0000 UTC m=+0.849694426 container attach 85e2bc82a5f18c21d7834cbd6f0c0d42e81b0fb249a6ba1fd384fe452906c87f (image=quay.io/ceph/ceph:v20, name=elated_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 16:08:50 np0005544708 podman[81564]: 2025-12-03 21:08:50.865788378 +0000 UTC m=+0.063920534 container exec 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 16:08:50 np0005544708 podman[81564]: 2025-12-03 21:08:50.982913778 +0000 UTC m=+0.181045884 container exec_died 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:51 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'rbd_support'
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:08:51 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'rgw'
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:08:51 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'rook'
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: [cephadm INFO root] Added host compute-0
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: [cephadm INFO root] Saving service mon spec with placement compute-0
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: [progress INFO root] update: starting ev 1713cf37-6938-4075-be59-f0a2a6a264e6 (Updating mgr deployment (-1 -> 1))
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.jdapcy from compute-0 -- ports [8765]
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.jdapcy from compute-0 -- ports [8765]
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:51 np0005544708 elated_moore[81546]: Added host 'compute-0' with addr '192.168.122.100'
Dec  3 16:08:51 np0005544708 elated_moore[81546]: Scheduled mon update...
Dec  3 16:08:51 np0005544708 elated_moore[81546]: Scheduled mgr update...
Dec  3 16:08:51 np0005544708 elated_moore[81546]: Scheduled osd.default_drive_group update...
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: [progress INFO root] Writing back 2 completed events
Dec  3 16:08:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:08:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:08:51 np0005544708 systemd[1]: libpod-85e2bc82a5f18c21d7834cbd6f0c0d42e81b0fb249a6ba1fd384fe452906c87f.scope: Deactivated successfully.
Dec  3 16:08:51 np0005544708 podman[81803]: 2025-12-03 21:08:51.823299622 +0000 UTC m=+0.029686696 container died 85e2bc82a5f18c21d7834cbd6f0c0d42e81b0fb249a6ba1fd384fe452906c87f (image=quay.io/ceph/ceph:v20, name=elated_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 16:08:51 np0005544708 systemd[1]: var-lib-containers-storage-overlay-0a707decd1ffea673dc464bd073abf3aa4105b05d4ccb4c86b0a15eb5cd26607-merged.mount: Deactivated successfully.
Dec  3 16:08:51 np0005544708 podman[81803]: 2025-12-03 21:08:51.871367822 +0000 UTC m=+0.077754896 container remove 85e2bc82a5f18c21d7834cbd6f0c0d42e81b0fb249a6ba1fd384fe452906c87f (image=quay.io/ceph/ceph:v20, name=elated_moore, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 16:08:51 np0005544708 systemd[1]: libpod-conmon-85e2bc82a5f18c21d7834cbd6f0c0d42e81b0fb249a6ba1fd384fe452906c87f.scope: Deactivated successfully.
Dec  3 16:08:52 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'selftest'
Dec  3 16:08:52 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'smb'
Dec  3 16:08:52 np0005544708 systemd[1]: Stopping Ceph mgr.compute-0.jdapcy for c21de27e-a7fd-594b-8324-0697ba9aab3a...
Dec  3 16:08:52 np0005544708 python3[81895]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:08:52 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'snap_schedule'
Dec  3 16:08:52 np0005544708 podman[81918]: 2025-12-03 21:08:52.446778476 +0000 UTC m=+0.081896138 container create 0c2ab80786eb651df95ad062510362b289311eca979d4306cf2fa024bbb1d574 (image=quay.io/ceph/ceph:v20, name=pensive_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:52 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:52 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:52 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:08:52 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:52 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:52 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:52 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:52 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:52 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:08:52 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:52 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:52 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:52 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:52 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:52 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:52 np0005544708 ceph-mgr[80827]: mgr[py] Loading python module 'stats'
Dec  3 16:08:52 np0005544708 podman[81918]: 2025-12-03 21:08:52.411911603 +0000 UTC m=+0.047029305 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:08:52 np0005544708 systemd[1]: Started libpod-conmon-0c2ab80786eb651df95ad062510362b289311eca979d4306cf2fa024bbb1d574.scope.
Dec  3 16:08:52 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:52 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a81441c14d6d1f3a925ead8e2ee0305dddff9ca4b57890b30177026cf94ad6f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:52 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a81441c14d6d1f3a925ead8e2ee0305dddff9ca4b57890b30177026cf94ad6f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:52 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a81441c14d6d1f3a925ead8e2ee0305dddff9ca4b57890b30177026cf94ad6f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:52 np0005544708 podman[81918]: 2025-12-03 21:08:52.570444597 +0000 UTC m=+0.205562239 container init 0c2ab80786eb651df95ad062510362b289311eca979d4306cf2fa024bbb1d574 (image=quay.io/ceph/ceph:v20, name=pensive_thompson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 16:08:52 np0005544708 podman[81939]: 2025-12-03 21:08:52.578886776 +0000 UTC m=+0.146759024 container died 70b4c07d62442eeb897b819626352cfaa1be138f8695a20ab1ca8ed143214483 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jdapcy, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Dec  3 16:08:52 np0005544708 podman[81918]: 2025-12-03 21:08:52.58222608 +0000 UTC m=+0.217343742 container start 0c2ab80786eb651df95ad062510362b289311eca979d4306cf2fa024bbb1d574 (image=quay.io/ceph/ceph:v20, name=pensive_thompson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 16:08:52 np0005544708 podman[81918]: 2025-12-03 21:08:52.594840941 +0000 UTC m=+0.229958563 container attach 0c2ab80786eb651df95ad062510362b289311eca979d4306cf2fa024bbb1d574 (image=quay.io/ceph/ceph:v20, name=pensive_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  3 16:08:52 np0005544708 systemd[1]: var-lib-containers-storage-overlay-4fed3044219a64be64de93285b6cd8db400e3b0cdcae1095ce876c7b38efed9d-merged.mount: Deactivated successfully.
Dec  3 16:08:52 np0005544708 podman[81939]: 2025-12-03 21:08:52.636490172 +0000 UTC m=+0.204362420 container remove 70b4c07d62442eeb897b819626352cfaa1be138f8695a20ab1ca8ed143214483 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jdapcy, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec  3 16:08:52 np0005544708 bash[81939]: ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jdapcy
Dec  3 16:08:52 np0005544708 systemd[1]: ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a@mgr.compute-0.jdapcy.service: Main process exited, code=exited, status=143/n/a
Dec  3 16:08:52 np0005544708 systemd[1]: ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a@mgr.compute-0.jdapcy.service: Failed with result 'exit-code'.
Dec  3 16:08:52 np0005544708 systemd[1]: Stopped Ceph mgr.compute-0.jdapcy for c21de27e-a7fd-594b-8324-0697ba9aab3a.
Dec  3 16:08:52 np0005544708 systemd[1]: ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a@mgr.compute-0.jdapcy.service: Consumed 6.880s CPU time, 384.9M memory peak, read 0B from disk, written 216.0K to disk.
Dec  3 16:08:52 np0005544708 systemd[1]: Reloading.
Dec  3 16:08:52 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:08:52 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2135877147' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec  3 16:08:53 np0005544708 pensive_thompson[81956]: 
Dec  3 16:08:53 np0005544708 pensive_thompson[81956]: {"fsid":"c21de27e-a7fd-594b-8324-0697ba9aab3a","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":51,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-03T21:07:59:373870+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-03T21:07:59.377140+0000","services":{}},"progress_events":{}}
Dec  3 16:08:53 np0005544708 systemd[1]: libpod-0c2ab80786eb651df95ad062510362b289311eca979d4306cf2fa024bbb1d574.scope: Deactivated successfully.
Dec  3 16:08:53 np0005544708 podman[81918]: 2025-12-03 21:08:53.094458529 +0000 UTC m=+0.729576191 container died 0c2ab80786eb651df95ad062510362b289311eca979d4306cf2fa024bbb1d574 (image=quay.io/ceph/ceph:v20, name=pensive_thompson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 16:08:53 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 16:08:53 np0005544708 systemd[1]: var-lib-containers-storage-overlay-3a81441c14d6d1f3a925ead8e2ee0305dddff9ca4b57890b30177026cf94ad6f-merged.mount: Deactivated successfully.
Dec  3 16:08:53 np0005544708 ceph-mgr[75500]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.jdapcy
Dec  3 16:08:53 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.jdapcy
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.jdapcy"} v 0)
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.jdapcy"} : dispatch
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.jdapcy"}]': finished
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:53 np0005544708 ceph-mgr[75500]: [progress INFO root] complete: finished ev 1713cf37-6938-4075-be59-f0a2a6a264e6 (Updating mgr deployment (-1 -> 1))
Dec  3 16:08:53 np0005544708 ceph-mgr[75500]: [progress INFO root] Completed event 1713cf37-6938-4075-be59-f0a2a6a264e6 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  3 16:08:53 np0005544708 podman[81918]: 2025-12-03 21:08:53.158149736 +0000 UTC m=+0.793267358 container remove 0c2ab80786eb651df95ad062510362b289311eca979d4306cf2fa024bbb1d574 (image=quay.io/ceph/ceph:v20, name=pensive_thompson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:53 np0005544708 systemd[1]: libpod-conmon-0c2ab80786eb651df95ad062510362b289311eca979d4306cf2fa024bbb1d574.scope: Deactivated successfully.
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: Added host compute-0
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: Saving service mon spec with placement compute-0
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: Saving service mgr spec with placement compute-0
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: Marking host: compute-0 for OSDSpec preview refresh.
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: Saving service osd.default_drive_group spec with placement compute-0
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: Removing daemon mgr.compute-0.jdapcy from compute-0 -- ports [8765]
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.jdapcy"} : dispatch
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.jdapcy"}]': finished
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:53 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:53 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:53 np0005544708 podman[82192]: 2025-12-03 21:08:53.86317252 +0000 UTC m=+0.087104958 container exec 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:08:53 np0005544708 podman[82192]: 2025-12-03 21:08:53.955011393 +0000 UTC m=+0.178943801 container exec_died 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: Removing key for mgr.compute-0.jdapcy
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:54 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:08:54 np0005544708 podman[82351]: 2025-12-03 21:08:54.913794588 +0000 UTC m=+0.059076773 container create 5ef7e690cffb46db45e18ef861bca85ff01b1193ddae4eee257f63fc6f1ed8b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Dec  3 16:08:54 np0005544708 systemd[1]: Started libpod-conmon-5ef7e690cffb46db45e18ef861bca85ff01b1193ddae4eee257f63fc6f1ed8b6.scope.
Dec  3 16:08:54 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:54 np0005544708 podman[82351]: 2025-12-03 21:08:54.892004578 +0000 UTC m=+0.037286773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:08:54 np0005544708 podman[82351]: 2025-12-03 21:08:54.990289102 +0000 UTC m=+0.135571327 container init 5ef7e690cffb46db45e18ef861bca85ff01b1193ddae4eee257f63fc6f1ed8b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chaplygin, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:54 np0005544708 podman[82351]: 2025-12-03 21:08:54.999718565 +0000 UTC m=+0.145000750 container start 5ef7e690cffb46db45e18ef861bca85ff01b1193ddae4eee257f63fc6f1ed8b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:08:55 np0005544708 podman[82351]: 2025-12-03 21:08:55.004078953 +0000 UTC m=+0.149361198 container attach 5ef7e690cffb46db45e18ef861bca85ff01b1193ddae4eee257f63fc6f1ed8b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 16:08:55 np0005544708 affectionate_chaplygin[82367]: 167 167
Dec  3 16:08:55 np0005544708 systemd[1]: libpod-5ef7e690cffb46db45e18ef861bca85ff01b1193ddae4eee257f63fc6f1ed8b6.scope: Deactivated successfully.
Dec  3 16:08:55 np0005544708 podman[82351]: 2025-12-03 21:08:55.006065232 +0000 UTC m=+0.151347418 container died 5ef7e690cffb46db45e18ef861bca85ff01b1193ddae4eee257f63fc6f1ed8b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chaplygin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 16:08:55 np0005544708 systemd[1]: var-lib-containers-storage-overlay-e6788c731b1be06689ac1472f19d689b11bf8dcc76c22a95b8dfe354d10a55e0-merged.mount: Deactivated successfully.
Dec  3 16:08:55 np0005544708 podman[82351]: 2025-12-03 21:08:55.057346392 +0000 UTC m=+0.202628547 container remove 5ef7e690cffb46db45e18ef861bca85ff01b1193ddae4eee257f63fc6f1ed8b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:08:55 np0005544708 systemd[1]: libpod-conmon-5ef7e690cffb46db45e18ef861bca85ff01b1193ddae4eee257f63fc6f1ed8b6.scope: Deactivated successfully.
Dec  3 16:08:55 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 16:08:55 np0005544708 podman[82390]: 2025-12-03 21:08:55.217060415 +0000 UTC m=+0.042363229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:08:55 np0005544708 podman[82390]: 2025-12-03 21:08:55.32225473 +0000 UTC m=+0.147557464 container create 35dda402f047d2f7d6f80c4624a429643b78d5c45fd19618790fd66ea3c7f2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_mestorf, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:08:55 np0005544708 systemd[1]: Started libpod-conmon-35dda402f047d2f7d6f80c4624a429643b78d5c45fd19618790fd66ea3c7f2df.scope.
Dec  3 16:08:55 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:08:55 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24f6de2fc2d08bcf56932009a440743ea8001b2a3fbd9773414d25e9b8a2e3bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:55 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24f6de2fc2d08bcf56932009a440743ea8001b2a3fbd9773414d25e9b8a2e3bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:55 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24f6de2fc2d08bcf56932009a440743ea8001b2a3fbd9773414d25e9b8a2e3bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:55 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24f6de2fc2d08bcf56932009a440743ea8001b2a3fbd9773414d25e9b8a2e3bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:55 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24f6de2fc2d08bcf56932009a440743ea8001b2a3fbd9773414d25e9b8a2e3bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:08:55 np0005544708 podman[82390]: 2025-12-03 21:08:55.464010079 +0000 UTC m=+0.289312833 container init 35dda402f047d2f7d6f80c4624a429643b78d5c45fd19618790fd66ea3c7f2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_mestorf, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:08:55 np0005544708 podman[82390]: 2025-12-03 21:08:55.477925673 +0000 UTC m=+0.303228437 container start 35dda402f047d2f7d6f80c4624a429643b78d5c45fd19618790fd66ea3c7f2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:08:55 np0005544708 podman[82390]: 2025-12-03 21:08:55.482013795 +0000 UTC m=+0.307316549 container attach 35dda402f047d2f7d6f80c4624a429643b78d5c45fd19618790fd66ea3c7f2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  3 16:08:55 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:56 np0005544708 sleepy_mestorf[82406]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:08:56 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:08:56 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:08:56 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 4d33bc95-baf8-481d-bc78-3b15ffd29872
Dec  3 16:08:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:08:56 np0005544708 ceph-mgr[75500]: [progress INFO root] Writing back 3 completed events
Dec  3 16:08:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  3 16:08:56 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "4d33bc95-baf8-481d-bc78-3b15ffd29872"} v 0)
Dec  3 16:08:56 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1335551350' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "4d33bc95-baf8-481d-bc78-3b15ffd29872"} : dispatch
Dec  3 16:08:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec  3 16:08:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 16:08:56 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1335551350' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4d33bc95-baf8-481d-bc78-3b15ffd29872"}]': finished
Dec  3 16:08:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec  3 16:08:56 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec  3 16:08:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  3 16:08:56 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  3 16:08:56 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 16:08:56 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec  3 16:08:56 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec  3 16:08:56 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  3 16:08:56 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  3 16:08:56 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec  3 16:08:57 np0005544708 lvm[82500]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:08:57 np0005544708 lvm[82500]: VG ceph_vg0 finished
Dec  3 16:08:57 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 16:08:57 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec  3 16:08:57 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3432163934' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec  3 16:08:57 np0005544708 sleepy_mestorf[82406]: stderr: got monmap epoch 1
Dec  3 16:08:57 np0005544708 sleepy_mestorf[82406]: --> Creating keyring file for osd.0
Dec  3 16:08:57 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec  3 16:08:57 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec  3 16:08:57 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 4d33bc95-baf8-481d-bc78-3b15ffd29872 --setuser ceph --setgroup ceph
Dec  3 16:08:57 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:57 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:08:57 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/1335551350' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "4d33bc95-baf8-481d-bc78-3b15ffd29872"} : dispatch
Dec  3 16:08:57 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/1335551350' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4d33bc95-baf8-481d-bc78-3b15ffd29872"}]': finished
Dec  3 16:08:57 np0005544708 ceph-mon[75204]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  3 16:08:57 np0005544708 ceph-mon[75204]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  3 16:08:58 np0005544708 sleepy_mestorf[82406]: stderr: 2025-12-03T21:08:57.641+0000 7f1ad02228c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Dec  3 16:08:58 np0005544708 sleepy_mestorf[82406]: stderr: 2025-12-03T21:08:57.661+0000 7f1ad02228c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec  3 16:08:58 np0005544708 sleepy_mestorf[82406]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec  3 16:08:58 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  3 16:08:58 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec  3 16:08:58 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  3 16:08:58 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec  3 16:08:58 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  3 16:08:58 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  3 16:08:58 np0005544708 sleepy_mestorf[82406]: --> ceph-volume lvm activate successful for osd ID: 0
Dec  3 16:08:58 np0005544708 sleepy_mestorf[82406]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec  3 16:08:58 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:08:58 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:08:58 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c4086f1b-ff53-4e63-8dc0-011238d77976
Dec  3 16:08:58 np0005544708 ceph-mon[75204]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  3 16:08:58 np0005544708 ceph-mon[75204]: Cluster is now healthy
Dec  3 16:08:59 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 16:08:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "c4086f1b-ff53-4e63-8dc0-011238d77976"} v 0)
Dec  3 16:08:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/69530890' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "c4086f1b-ff53-4e63-8dc0-011238d77976"} : dispatch
Dec  3 16:08:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec  3 16:08:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 16:08:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/69530890' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c4086f1b-ff53-4e63-8dc0-011238d77976"}]': finished
Dec  3 16:08:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec  3 16:08:59 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec  3 16:08:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  3 16:08:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  3 16:08:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  3 16:08:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  3 16:08:59 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 16:08:59 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 16:08:59 np0005544708 lvm[83449]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:08:59 np0005544708 lvm[83449]: VG ceph_vg1 finished
Dec  3 16:08:59 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec  3 16:08:59 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec  3 16:08:59 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  3 16:08:59 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  3 16:08:59 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec  3 16:08:59 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:08:59 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/69530890' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "c4086f1b-ff53-4e63-8dc0-011238d77976"} : dispatch
Dec  3 16:08:59 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/69530890' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c4086f1b-ff53-4e63-8dc0-011238d77976"}]': finished
Dec  3 16:08:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec  3 16:08:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/112235096' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec  3 16:08:59 np0005544708 sleepy_mestorf[82406]: stderr: got monmap epoch 1
Dec  3 16:08:59 np0005544708 sleepy_mestorf[82406]: --> Creating keyring file for osd.1
Dec  3 16:08:59 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec  3 16:08:59 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec  3 16:08:59 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid c4086f1b-ff53-4e63-8dc0-011238d77976 --setuser ceph --setgroup ceph
Dec  3 16:09:00 np0005544708 sleepy_mestorf[82406]: stderr: 2025-12-03T21:08:59.933+0000 7f32fb9af8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Dec  3 16:09:00 np0005544708 sleepy_mestorf[82406]: stderr: 2025-12-03T21:08:59.958+0000 7f32fb9af8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec  3 16:09:00 np0005544708 sleepy_mestorf[82406]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec  3 16:09:00 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  3 16:09:00 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  3 16:09:00 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:00 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:00 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  3 16:09:00 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  3 16:09:00 np0005544708 sleepy_mestorf[82406]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  3 16:09:00 np0005544708 sleepy_mestorf[82406]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec  3 16:09:00 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:01 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:01 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new abcd6a67-9013-4470-978f-f75da5f33cd4
Dec  3 16:09:01 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 16:09:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "abcd6a67-9013-4470-978f-f75da5f33cd4"} v 0)
Dec  3 16:09:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4215295916' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "abcd6a67-9013-4470-978f-f75da5f33cd4"} : dispatch
Dec  3 16:09:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec  3 16:09:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 16:09:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4215295916' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "abcd6a67-9013-4470-978f-f75da5f33cd4"}]': finished
Dec  3 16:09:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Dec  3 16:09:01 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Dec  3 16:09:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  3 16:09:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  3 16:09:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  3 16:09:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  3 16:09:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  3 16:09:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  3 16:09:01 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 16:09:01 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 16:09:01 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 16:09:01 np0005544708 lvm[84398]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:09:01 np0005544708 lvm[84398]: VG ceph_vg2 finished
Dec  3 16:09:01 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Dec  3 16:09:01 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Dec  3 16:09:01 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  3 16:09:01 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:01 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Dec  3 16:09:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:09:01 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:09:01 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/4215295916' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "abcd6a67-9013-4470-978f-f75da5f33cd4"} : dispatch
Dec  3 16:09:01 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/4215295916' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "abcd6a67-9013-4470-978f-f75da5f33cd4"}]': finished
Dec  3 16:09:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec  3 16:09:02 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1536442231' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec  3 16:09:02 np0005544708 sleepy_mestorf[82406]: stderr: got monmap epoch 1
Dec  3 16:09:02 np0005544708 sleepy_mestorf[82406]: --> Creating keyring file for osd.2
Dec  3 16:09:02 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Dec  3 16:09:02 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Dec  3 16:09:02 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid abcd6a67-9013-4470-978f-f75da5f33cd4 --setuser ceph --setgroup ceph
Dec  3 16:09:03 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 16:09:03 np0005544708 sleepy_mestorf[82406]: stderr: 2025-12-03T21:09:02.309+0000 7f4160f628c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Dec  3 16:09:03 np0005544708 sleepy_mestorf[82406]: stderr: 2025-12-03T21:09:02.325+0000 7f4160f628c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Dec  3 16:09:03 np0005544708 sleepy_mestorf[82406]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Dec  3 16:09:03 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  3 16:09:03 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec  3 16:09:03 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:03 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:03 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  3 16:09:03 np0005544708 sleepy_mestorf[82406]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  3 16:09:03 np0005544708 sleepy_mestorf[82406]: --> ceph-volume lvm activate successful for osd ID: 2
Dec  3 16:09:03 np0005544708 sleepy_mestorf[82406]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Dec  3 16:09:03 np0005544708 systemd[1]: libpod-35dda402f047d2f7d6f80c4624a429643b78d5c45fd19618790fd66ea3c7f2df.scope: Deactivated successfully.
Dec  3 16:09:03 np0005544708 systemd[1]: libpod-35dda402f047d2f7d6f80c4624a429643b78d5c45fd19618790fd66ea3c7f2df.scope: Consumed 6.485s CPU time.
Dec  3 16:09:03 np0005544708 podman[85314]: 2025-12-03 21:09:03.39882178 +0000 UTC m=+0.035288055 container died 35dda402f047d2f7d6f80c4624a429643b78d5c45fd19618790fd66ea3c7f2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 16:09:03 np0005544708 systemd[1]: var-lib-containers-storage-overlay-24f6de2fc2d08bcf56932009a440743ea8001b2a3fbd9773414d25e9b8a2e3bc-merged.mount: Deactivated successfully.
Dec  3 16:09:03 np0005544708 podman[85314]: 2025-12-03 21:09:03.445650058 +0000 UTC m=+0.082116283 container remove 35dda402f047d2f7d6f80c4624a429643b78d5c45fd19618790fd66ea3c7f2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_mestorf, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 16:09:03 np0005544708 systemd[1]: libpod-conmon-35dda402f047d2f7d6f80c4624a429643b78d5c45fd19618790fd66ea3c7f2df.scope: Deactivated successfully.
Dec  3 16:09:03 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:09:03 np0005544708 podman[85391]: 2025-12-03 21:09:03.994322121 +0000 UTC m=+0.044006360 container create 67a4bf83ed5b553d4b0b43f0835e9895d2ab8b7ae068491d23cedae5b4024609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:04 np0005544708 systemd[1]: Started libpod-conmon-67a4bf83ed5b553d4b0b43f0835e9895d2ab8b7ae068491d23cedae5b4024609.scope.
Dec  3 16:09:04 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:04 np0005544708 podman[85391]: 2025-12-03 21:09:03.977260239 +0000 UTC m=+0.026944498 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:04 np0005544708 podman[85391]: 2025-12-03 21:09:04.074056535 +0000 UTC m=+0.123740835 container init 67a4bf83ed5b553d4b0b43f0835e9895d2ab8b7ae068491d23cedae5b4024609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec  3 16:09:04 np0005544708 podman[85391]: 2025-12-03 21:09:04.090156843 +0000 UTC m=+0.139841092 container start 67a4bf83ed5b553d4b0b43f0835e9895d2ab8b7ae068491d23cedae5b4024609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  3 16:09:04 np0005544708 podman[85391]: 2025-12-03 21:09:04.095026484 +0000 UTC m=+0.144711413 container attach 67a4bf83ed5b553d4b0b43f0835e9895d2ab8b7ae068491d23cedae5b4024609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ride, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:04 np0005544708 goofy_ride[85407]: 167 167
Dec  3 16:09:04 np0005544708 systemd[1]: libpod-67a4bf83ed5b553d4b0b43f0835e9895d2ab8b7ae068491d23cedae5b4024609.scope: Deactivated successfully.
Dec  3 16:09:04 np0005544708 podman[85391]: 2025-12-03 21:09:04.097616978 +0000 UTC m=+0.147301267 container died 67a4bf83ed5b553d4b0b43f0835e9895d2ab8b7ae068491d23cedae5b4024609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:04 np0005544708 systemd[1]: var-lib-containers-storage-overlay-b9a0e8c9dba25b3b4f70493f4e0a44564ba5a88f441f0ac80a354991da530240-merged.mount: Deactivated successfully.
Dec  3 16:09:04 np0005544708 podman[85391]: 2025-12-03 21:09:04.140183182 +0000 UTC m=+0.189867441 container remove 67a4bf83ed5b553d4b0b43f0835e9895d2ab8b7ae068491d23cedae5b4024609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  3 16:09:04 np0005544708 systemd[1]: libpod-conmon-67a4bf83ed5b553d4b0b43f0835e9895d2ab8b7ae068491d23cedae5b4024609.scope: Deactivated successfully.
Dec  3 16:09:04 np0005544708 podman[85431]: 2025-12-03 21:09:04.374182355 +0000 UTC m=+0.069132163 container create 804c274abe538c25d709c5d28deafbfe8b9559f094847d2c6f590bf62c7213ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 16:09:04 np0005544708 systemd[1]: Started libpod-conmon-804c274abe538c25d709c5d28deafbfe8b9559f094847d2c6f590bf62c7213ca.scope.
Dec  3 16:09:04 np0005544708 podman[85431]: 2025-12-03 21:09:04.347441993 +0000 UTC m=+0.042391871 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:04 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:04 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f01e3c07af433cd59fbeb9369bff6d1c06b08f15d26e30047fc3c3da596f959/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:04 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f01e3c07af433cd59fbeb9369bff6d1c06b08f15d26e30047fc3c3da596f959/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:04 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f01e3c07af433cd59fbeb9369bff6d1c06b08f15d26e30047fc3c3da596f959/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:04 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f01e3c07af433cd59fbeb9369bff6d1c06b08f15d26e30047fc3c3da596f959/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:04 np0005544708 podman[85431]: 2025-12-03 21:09:04.472757365 +0000 UTC m=+0.167707253 container init 804c274abe538c25d709c5d28deafbfe8b9559f094847d2c6f590bf62c7213ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_rhodes, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 16:09:04 np0005544708 podman[85431]: 2025-12-03 21:09:04.487268434 +0000 UTC m=+0.182218262 container start 804c274abe538c25d709c5d28deafbfe8b9559f094847d2c6f590bf62c7213ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 16:09:04 np0005544708 podman[85431]: 2025-12-03 21:09:04.491706425 +0000 UTC m=+0.186656283 container attach 804c274abe538c25d709c5d28deafbfe8b9559f094847d2c6f590bf62c7213ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_rhodes, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]: {
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:    "0": [
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:        {
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "devices": [
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "/dev/loop3"
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            ],
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "lv_name": "ceph_lv0",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "lv_size": "21470642176",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "name": "ceph_lv0",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "tags": {
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.cluster_name": "ceph",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.crush_device_class": "",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.encrypted": "0",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.objectstore": "bluestore",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.osd_id": "0",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.type": "block",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.vdo": "0",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.with_tpm": "0"
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            },
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "type": "block",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "vg_name": "ceph_vg0"
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:        }
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:    ],
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:    "1": [
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:        {
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "devices": [
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "/dev/loop4"
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            ],
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "lv_name": "ceph_lv1",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "lv_size": "21470642176",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "name": "ceph_lv1",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "tags": {
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.cluster_name": "ceph",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.crush_device_class": "",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.encrypted": "0",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.objectstore": "bluestore",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.osd_id": "1",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.type": "block",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.vdo": "0",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.with_tpm": "0"
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            },
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "type": "block",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "vg_name": "ceph_vg1"
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:        }
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:    ],
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:    "2": [
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:        {
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "devices": [
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "/dev/loop5"
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            ],
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "lv_name": "ceph_lv2",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "lv_size": "21470642176",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "name": "ceph_lv2",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "tags": {
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.cluster_name": "ceph",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.crush_device_class": "",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.encrypted": "0",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.objectstore": "bluestore",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.osd_id": "2",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.type": "block",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.vdo": "0",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:                "ceph.with_tpm": "0"
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            },
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "type": "block",
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:            "vg_name": "ceph_vg2"
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:        }
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]:    ]
Dec  3 16:09:04 np0005544708 objective_rhodes[85448]: }
Dec  3 16:09:04 np0005544708 systemd[1]: libpod-804c274abe538c25d709c5d28deafbfe8b9559f094847d2c6f590bf62c7213ca.scope: Deactivated successfully.
Dec  3 16:09:04 np0005544708 podman[85431]: 2025-12-03 21:09:04.841150455 +0000 UTC m=+0.536100283 container died 804c274abe538c25d709c5d28deafbfe8b9559f094847d2c6f590bf62c7213ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_rhodes, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec  3 16:09:04 np0005544708 systemd[1]: var-lib-containers-storage-overlay-0f01e3c07af433cd59fbeb9369bff6d1c06b08f15d26e30047fc3c3da596f959-merged.mount: Deactivated successfully.
Dec  3 16:09:04 np0005544708 podman[85431]: 2025-12-03 21:09:04.892848225 +0000 UTC m=+0.587798023 container remove 804c274abe538c25d709c5d28deafbfe8b9559f094847d2c6f590bf62c7213ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_rhodes, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:04 np0005544708 systemd[1]: libpod-conmon-804c274abe538c25d709c5d28deafbfe8b9559f094847d2c6f590bf62c7213ca.scope: Deactivated successfully.
Dec  3 16:09:04 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec  3 16:09:04 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec  3 16:09:04 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:09:04 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:09:04 np0005544708 ceph-mgr[75500]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Dec  3 16:09:04 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Dec  3 16:09:05 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec  3 16:09:05 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 16:09:05 np0005544708 podman[85561]: 2025-12-03 21:09:05.671982744 +0000 UTC m=+0.059980796 container create b3bacbb6bd5d2ac90c3ddef9f3b5d47c83f4db7dfc94823a582db0276a2528c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_napier, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec  3 16:09:05 np0005544708 systemd[1]: Started libpod-conmon-b3bacbb6bd5d2ac90c3ddef9f3b5d47c83f4db7dfc94823a582db0276a2528c0.scope.
Dec  3 16:09:05 np0005544708 podman[85561]: 2025-12-03 21:09:05.656252364 +0000 UTC m=+0.044250436 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:05 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:09:05 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:05 np0005544708 podman[85561]: 2025-12-03 21:09:05.786025045 +0000 UTC m=+0.174023207 container init b3bacbb6bd5d2ac90c3ddef9f3b5d47c83f4db7dfc94823a582db0276a2528c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:05 np0005544708 podman[85561]: 2025-12-03 21:09:05.798222534 +0000 UTC m=+0.186220596 container start b3bacbb6bd5d2ac90c3ddef9f3b5d47c83f4db7dfc94823a582db0276a2528c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:09:05 np0005544708 podman[85561]: 2025-12-03 21:09:05.801943501 +0000 UTC m=+0.189941653 container attach b3bacbb6bd5d2ac90c3ddef9f3b5d47c83f4db7dfc94823a582db0276a2528c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_napier, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:09:05 np0005544708 tender_napier[85578]: 167 167
Dec  3 16:09:05 np0005544708 systemd[1]: libpod-b3bacbb6bd5d2ac90c3ddef9f3b5d47c83f4db7dfc94823a582db0276a2528c0.scope: Deactivated successfully.
Dec  3 16:09:05 np0005544708 podman[85561]: 2025-12-03 21:09:05.80683761 +0000 UTC m=+0.194835702 container died b3bacbb6bd5d2ac90c3ddef9f3b5d47c83f4db7dfc94823a582db0276a2528c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_napier, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Dec  3 16:09:05 np0005544708 systemd[1]: var-lib-containers-storage-overlay-f4b6855a5657655f53d145eb6d8837f7c961041453902f23752303535f4d5633-merged.mount: Deactivated successfully.
Dec  3 16:09:05 np0005544708 podman[85561]: 2025-12-03 21:09:05.86554161 +0000 UTC m=+0.253539702 container remove b3bacbb6bd5d2ac90c3ddef9f3b5d47c83f4db7dfc94823a582db0276a2528c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec  3 16:09:05 np0005544708 systemd[1]: libpod-conmon-b3bacbb6bd5d2ac90c3ddef9f3b5d47c83f4db7dfc94823a582db0276a2528c0.scope: Deactivated successfully.
Dec  3 16:09:06 np0005544708 ceph-mon[75204]: Deploying daemon osd.0 on compute-0
Dec  3 16:09:06 np0005544708 podman[85609]: 2025-12-03 21:09:06.172149546 +0000 UTC m=+0.065048161 container create 01df66932a42f22efa983977d6615177436d823b61a3cd92165e71abd955354e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate-test, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  3 16:09:06 np0005544708 systemd[1]: Started libpod-conmon-01df66932a42f22efa983977d6615177436d823b61a3cd92165e71abd955354e.scope.
Dec  3 16:09:06 np0005544708 podman[85609]: 2025-12-03 21:09:06.146258806 +0000 UTC m=+0.039157461 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:06 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:06 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732a454788bcef6a58ffa0718cbb7e894341dae6b3d25eba3a92d6c67062602e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:06 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732a454788bcef6a58ffa0718cbb7e894341dae6b3d25eba3a92d6c67062602e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:06 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732a454788bcef6a58ffa0718cbb7e894341dae6b3d25eba3a92d6c67062602e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:06 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732a454788bcef6a58ffa0718cbb7e894341dae6b3d25eba3a92d6c67062602e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:06 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732a454788bcef6a58ffa0718cbb7e894341dae6b3d25eba3a92d6c67062602e/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:06 np0005544708 podman[85609]: 2025-12-03 21:09:06.297175881 +0000 UTC m=+0.190074556 container init 01df66932a42f22efa983977d6615177436d823b61a3cd92165e71abd955354e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate-test, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec  3 16:09:06 np0005544708 podman[85609]: 2025-12-03 21:09:06.303699335 +0000 UTC m=+0.196597950 container start 01df66932a42f22efa983977d6615177436d823b61a3cd92165e71abd955354e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate-test, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec  3 16:09:06 np0005544708 podman[85609]: 2025-12-03 21:09:06.307474721 +0000 UTC m=+0.200373416 container attach 01df66932a42f22efa983977d6615177436d823b61a3cd92165e71abd955354e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:06 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate-test[85625]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec  3 16:09:06 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate-test[85625]:                            [--no-systemd] [--no-tmpfs]
Dec  3 16:09:06 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate-test[85625]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  3 16:09:06 np0005544708 systemd[1]: libpod-01df66932a42f22efa983977d6615177436d823b61a3cd92165e71abd955354e.scope: Deactivated successfully.
Dec  3 16:09:06 np0005544708 podman[85609]: 2025-12-03 21:09:06.497813541 +0000 UTC m=+0.390712186 container died 01df66932a42f22efa983977d6615177436d823b61a3cd92165e71abd955354e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate-test, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  3 16:09:06 np0005544708 systemd[1]: var-lib-containers-storage-overlay-732a454788bcef6a58ffa0718cbb7e894341dae6b3d25eba3a92d6c67062602e-merged.mount: Deactivated successfully.
Dec  3 16:09:06 np0005544708 podman[85609]: 2025-12-03 21:09:06.546265121 +0000 UTC m=+0.439163746 container remove 01df66932a42f22efa983977d6615177436d823b61a3cd92165e71abd955354e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:06 np0005544708 systemd[1]: libpod-conmon-01df66932a42f22efa983977d6615177436d823b61a3cd92165e71abd955354e.scope: Deactivated successfully.
Dec  3 16:09:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:09:06 np0005544708 systemd[1]: Reloading.
Dec  3 16:09:06 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:09:06 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:09:07 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 16:09:07 np0005544708 systemd[1]: Reloading.
Dec  3 16:09:07 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:09:07 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:09:07 np0005544708 systemd[1]: Starting Ceph osd.0 for c21de27e-a7fd-594b-8324-0697ba9aab3a...
Dec  3 16:09:07 np0005544708 podman[85783]: 2025-12-03 21:09:07.676880957 +0000 UTC m=+0.051613506 container create 7ab8241e6960f866d0752598f0481dfff0a40fba60997b82e36328d4e487e60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:07 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93aeabd5fb2036a9a60e6d6fc8084e7ed274504724adef94b91701868f95184d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93aeabd5fb2036a9a60e6d6fc8084e7ed274504724adef94b91701868f95184d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93aeabd5fb2036a9a60e6d6fc8084e7ed274504724adef94b91701868f95184d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93aeabd5fb2036a9a60e6d6fc8084e7ed274504724adef94b91701868f95184d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93aeabd5fb2036a9a60e6d6fc8084e7ed274504724adef94b91701868f95184d/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:07 np0005544708 podman[85783]: 2025-12-03 21:09:07.657506251 +0000 UTC m=+0.032238830 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:07 np0005544708 podman[85783]: 2025-12-03 21:09:07.754267068 +0000 UTC m=+0.128999657 container init 7ab8241e6960f866d0752598f0481dfff0a40fba60997b82e36328d4e487e60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:07 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:09:07 np0005544708 podman[85783]: 2025-12-03 21:09:07.767876476 +0000 UTC m=+0.142609055 container start 7ab8241e6960f866d0752598f0481dfff0a40fba60997b82e36328d4e487e60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 16:09:07 np0005544708 podman[85783]: 2025-12-03 21:09:07.772523731 +0000 UTC m=+0.147256300 container attach 7ab8241e6960f866d0752598f0481dfff0a40fba60997b82e36328d4e487e60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 16:09:07 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate[85799]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:07 np0005544708 bash[85783]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:07 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate[85799]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:07 np0005544708 bash[85783]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:08 np0005544708 lvm[85886]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:09:08 np0005544708 lvm[85886]: VG ceph_vg2 finished
Dec  3 16:09:08 np0005544708 lvm[85887]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:09:08 np0005544708 lvm[85887]: VG ceph_vg1 finished
Dec  3 16:09:08 np0005544708 lvm[85883]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:09:08 np0005544708 lvm[85883]: VG ceph_vg0 finished
Dec  3 16:09:08 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate[85799]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  3 16:09:08 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate[85799]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:08 np0005544708 bash[85783]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  3 16:09:08 np0005544708 bash[85783]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:08 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate[85799]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:08 np0005544708 bash[85783]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:08 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate[85799]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  3 16:09:08 np0005544708 bash[85783]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  3 16:09:08 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate[85799]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec  3 16:09:08 np0005544708 bash[85783]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec  3 16:09:08 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate[85799]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:08 np0005544708 bash[85783]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:08 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate[85799]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:08 np0005544708 bash[85783]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:08 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate[85799]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  3 16:09:08 np0005544708 bash[85783]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  3 16:09:08 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate[85799]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  3 16:09:08 np0005544708 bash[85783]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  3 16:09:08 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate[85799]: --> ceph-volume lvm activate successful for osd ID: 0
Dec  3 16:09:08 np0005544708 bash[85783]: --> ceph-volume lvm activate successful for osd ID: 0
Dec  3 16:09:08 np0005544708 systemd[1]: libpod-7ab8241e6960f866d0752598f0481dfff0a40fba60997b82e36328d4e487e60e.scope: Deactivated successfully.
Dec  3 16:09:08 np0005544708 podman[85783]: 2025-12-03 21:09:08.950761031 +0000 UTC m=+1.325493590 container died 7ab8241e6960f866d0752598f0481dfff0a40fba60997b82e36328d4e487e60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 16:09:08 np0005544708 systemd[1]: libpod-7ab8241e6960f866d0752598f0481dfff0a40fba60997b82e36328d4e487e60e.scope: Consumed 1.709s CPU time.
Dec  3 16:09:08 np0005544708 systemd[1]: var-lib-containers-storage-overlay-93aeabd5fb2036a9a60e6d6fc8084e7ed274504724adef94b91701868f95184d-merged.mount: Deactivated successfully.
Dec  3 16:09:09 np0005544708 podman[85783]: 2025-12-03 21:09:09.01193512 +0000 UTC m=+1.386667669 container remove 7ab8241e6960f866d0752598f0481dfff0a40fba60997b82e36328d4e487e60e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0-activate, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:09 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 16:09:09 np0005544708 podman[86040]: 2025-12-03 21:09:09.289110974 +0000 UTC m=+0.052800480 container create fbaf3a19f1641818d960e065ac83d4b982a8620013c915c30fea979d7a9b5f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:09 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94de65b3a1b4cc1c3ae2fb2f5a4f4f678207e02cf957939c05c5f4000d0b6c8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:09 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94de65b3a1b4cc1c3ae2fb2f5a4f4f678207e02cf957939c05c5f4000d0b6c8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:09 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94de65b3a1b4cc1c3ae2fb2f5a4f4f678207e02cf957939c05c5f4000d0b6c8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:09 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94de65b3a1b4cc1c3ae2fb2f5a4f4f678207e02cf957939c05c5f4000d0b6c8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:09 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94de65b3a1b4cc1c3ae2fb2f5a4f4f678207e02cf957939c05c5f4000d0b6c8e/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:09 np0005544708 podman[86040]: 2025-12-03 21:09:09.333543593 +0000 UTC m=+0.097233099 container init fbaf3a19f1641818d960e065ac83d4b982a8620013c915c30fea979d7a9b5f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:09 np0005544708 podman[86040]: 2025-12-03 21:09:09.342368753 +0000 UTC m=+0.106058259 container start fbaf3a19f1641818d960e065ac83d4b982a8620013c915c30fea979d7a9b5f7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Dec  3 16:09:09 np0005544708 bash[86040]: fbaf3a19f1641818d960e065ac83d4b982a8620013c915c30fea979d7a9b5f7d
Dec  3 16:09:09 np0005544708 podman[86040]: 2025-12-03 21:09:09.263523442 +0000 UTC m=+0.027213038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:09 np0005544708 systemd[1]: Started Ceph osd.0 for c21de27e-a7fd-594b-8324-0697ba9aab3a.
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: pidfile_write: ignore empty --pid-file
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 16:09:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:09:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:09:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec  3 16:09:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 16:09:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:09:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:09:09 np0005544708 ceph-mgr[75500]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec  3 16:09:09 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4400 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da4000 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: load: jerasure load: lrc 
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 16:09:09 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561448da5c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561449a3b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561449a3b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561449a3b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561449a3b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs mount
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs mount shared_bdev_used = 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: RocksDB version: 7.9.2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Git sha 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Compile date 2025-10-30 15:42:43
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: DB SUMMARY
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: DB Session ID:  STVQO16ELC5LNUOQD2NX
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: CURRENT file:  CURRENT
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                         Options.error_if_exists: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.create_if_missing: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                                     Options.env: 0x561448c35ea0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                                Options.info_log: 0x561449c868a0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                              Options.statistics: (nil)
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.use_fsync: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                              Options.db_log_dir: 
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                                 Options.wal_dir: db.wal
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.write_buffer_manager: 0x561448c9ab40
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.unordered_write: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.row_cache: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                              Options.wal_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.two_write_queues: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.wal_compression: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.atomic_flush: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.max_background_jobs: 4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.max_background_compactions: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.max_subcompactions: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.max_open_files: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Compression algorithms supported:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: 	kZSTD supported: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: 	kXpressCompression supported: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: 	kBZip2Compression supported: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: 	kLZ4Compression supported: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: 	kZlibCompression supported: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: 	kSnappyCompression supported: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c86c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561448c398d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c86c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561448c398d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c86c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561448c398d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c86c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561448c398d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c86c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561448c398d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c86c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561448c398d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c86c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561448c398d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c86c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561448c39a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c86c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561448c39a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c86c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561448c39a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d2169ef4-0915-4f24-b94a-0a07278c7229
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796149820187, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796149821971, "job": 1, "event": "recovery_finished"}
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: freelist init
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: freelist _read_cfg
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs umount
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561449a3b800 /var/lib/ceph/osd/ceph-0/block) close
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561449a3b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561449a3b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561449a3b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bdev(0x561449a3b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs mount
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluefs mount shared_bdev_used = 27262976
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: RocksDB version: 7.9.2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Git sha 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Compile date 2025-10-30 15:42:43
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: DB SUMMARY
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: DB Session ID:  STVQO16ELC5LNUOQD2NW
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: CURRENT file:  CURRENT
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                         Options.error_if_exists: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.create_if_missing: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                                     Options.env: 0x561449e56a80
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                                Options.info_log: 0x561449c86960
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                              Options.statistics: (nil)
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.use_fsync: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                              Options.db_log_dir: 
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                                 Options.wal_dir: db.wal
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.write_buffer_manager: 0x561448c9b900
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.unordered_write: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.row_cache: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                              Options.wal_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.two_write_queues: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.wal_compression: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.atomic_flush: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.max_background_jobs: 4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.max_background_compactions: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.max_subcompactions: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.max_open_files: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Compression algorithms supported:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: #011kZSTD supported: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: #011kXpressCompression supported: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: #011kBZip2Compression supported: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: #011kLZ4Compression supported: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: #011kZlibCompression supported: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: #011kSnappyCompression supported: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c86bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561448c398d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c86bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561448c398d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c86bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561448c398d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c86bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561448c398d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c86bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561448c398d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c86bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561448c398d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c86bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561448c398d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c870c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561448c39a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c870c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561448c39a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561449c870c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561448c39a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d2169ef4-0915-4f24-b94a-0a07278c7229
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796149884971, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796149904165, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796149, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2169ef4-0915-4f24-b94a-0a07278c7229", "db_session_id": "STVQO16ELC5LNUOQD2NW", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796149907370, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796149, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2169ef4-0915-4f24-b94a-0a07278c7229", "db_session_id": "STVQO16ELC5LNUOQD2NW", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796149910316, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796149, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2169ef4-0915-4f24-b94a-0a07278c7229", "db_session_id": "STVQO16ELC5LNUOQD2NW", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796149912101, "job": 1, "event": "recovery_finished"}
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561449ea0000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: DB pointer 0x561449e40000
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.1 total, 0.1 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561448c398d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561448c398d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: _get_class not permitted to load lua
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: _get_class not permitted to load sdk
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: osd.0 0 load_pgs
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: osd.0 0 load_pgs opened 0 pgs
Dec  3 16:09:09 np0005544708 ceph-osd[86059]: osd.0 0 log_to_monitors true
Dec  3 16:09:09 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0[86055]: 2025-12-03T21:09:09.943+0000 7fb861a258c0 -1 osd.0 0 log_to_monitors true
Dec  3 16:09:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec  3 16:09:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1181083466,v1:192.168.122.100:6803/1181083466]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Dec  3 16:09:10 np0005544708 podman[86592]: 2025-12-03 21:09:10.011910866 +0000 UTC m=+0.040750574 container create 7a74ab5b6fb41a56f4def017af9b2a6a9792b5a1cce751d3db90d7137a9f5440 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec  3 16:09:10 np0005544708 systemd[1]: Started libpod-conmon-7a74ab5b6fb41a56f4def017af9b2a6a9792b5a1cce751d3db90d7137a9f5440.scope.
Dec  3 16:09:10 np0005544708 podman[86592]: 2025-12-03 21:09:09.998172905 +0000 UTC m=+0.027012623 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:10 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:10 np0005544708 podman[86592]: 2025-12-03 21:09:10.11388164 +0000 UTC m=+0.142721438 container init 7a74ab5b6fb41a56f4def017af9b2a6a9792b5a1cce751d3db90d7137a9f5440 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_goldwasser, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:10 np0005544708 podman[86592]: 2025-12-03 21:09:10.126717412 +0000 UTC m=+0.155557160 container start 7a74ab5b6fb41a56f4def017af9b2a6a9792b5a1cce751d3db90d7137a9f5440 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:10 np0005544708 podman[86592]: 2025-12-03 21:09:10.131173213 +0000 UTC m=+0.160013021 container attach 7a74ab5b6fb41a56f4def017af9b2a6a9792b5a1cce751d3db90d7137a9f5440 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_goldwasser, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:10 np0005544708 charming_goldwasser[86609]: 167 167
Dec  3 16:09:10 np0005544708 systemd[1]: libpod-7a74ab5b6fb41a56f4def017af9b2a6a9792b5a1cce751d3db90d7137a9f5440.scope: Deactivated successfully.
Dec  3 16:09:10 np0005544708 conmon[86609]: conmon 7a74ab5b6fb41a56f4de <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a74ab5b6fb41a56f4def017af9b2a6a9792b5a1cce751d3db90d7137a9f5440.scope/container/memory.events
Dec  3 16:09:10 np0005544708 podman[86592]: 2025-12-03 21:09:10.136560814 +0000 UTC m=+0.165400562 container died 7a74ab5b6fb41a56f4def017af9b2a6a9792b5a1cce751d3db90d7137a9f5440 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  3 16:09:10 np0005544708 systemd[1]: var-lib-containers-storage-overlay-bdb307fa6a0c82e1628436490bed9a9efcd4cb7b79781bdf7e9fceddedc5d8c0-merged.mount: Deactivated successfully.
Dec  3 16:09:10 np0005544708 podman[86592]: 2025-12-03 21:09:10.190068757 +0000 UTC m=+0.218908515 container remove 7a74ab5b6fb41a56f4def017af9b2a6a9792b5a1cce751d3db90d7137a9f5440 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:09:10 np0005544708 systemd[1]: libpod-conmon-7a74ab5b6fb41a56f4def017af9b2a6a9792b5a1cce751d3db90d7137a9f5440.scope: Deactivated successfully.
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: Deploying daemon osd.1 on compute-0
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: from='osd.0 [v2:192.168.122.100:6802/1181083466,v1:192.168.122.100:6803/1181083466]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1181083466,v1:192.168.122.100:6803/1181083466]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1181083466,v1:192.168.122.100:6803/1181083466]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  3 16:09:10 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  3 16:09:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  3 16:09:10 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 16:09:10 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 16:09:10 np0005544708 podman[86639]: 2025-12-03 21:09:10.531553066 +0000 UTC m=+0.052676338 container create 82b748883acf228d74b734608c59a679b018f0ba374661caffeadbd90320a9b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate-test, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 16:09:10 np0005544708 systemd[1]: Started libpod-conmon-82b748883acf228d74b734608c59a679b018f0ba374661caffeadbd90320a9b6.scope.
Dec  3 16:09:10 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:10 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0813c3f1db70882f987d58f5e84731b234c624483c74d6d6aa2deccae866249d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:10 np0005544708 podman[86639]: 2025-12-03 21:09:10.506728708 +0000 UTC m=+0.027852030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:10 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0813c3f1db70882f987d58f5e84731b234c624483c74d6d6aa2deccae866249d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:10 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0813c3f1db70882f987d58f5e84731b234c624483c74d6d6aa2deccae866249d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:10 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0813c3f1db70882f987d58f5e84731b234c624483c74d6d6aa2deccae866249d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:10 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0813c3f1db70882f987d58f5e84731b234c624483c74d6d6aa2deccae866249d/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:10 np0005544708 podman[86639]: 2025-12-03 21:09:10.615197755 +0000 UTC m=+0.136321007 container init 82b748883acf228d74b734608c59a679b018f0ba374661caffeadbd90320a9b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate-test, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:10 np0005544708 podman[86639]: 2025-12-03 21:09:10.627943935 +0000 UTC m=+0.149067177 container start 82b748883acf228d74b734608c59a679b018f0ba374661caffeadbd90320a9b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate-test, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:10 np0005544708 podman[86639]: 2025-12-03 21:09:10.631486928 +0000 UTC m=+0.152610170 container attach 82b748883acf228d74b734608c59a679b018f0ba374661caffeadbd90320a9b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate-test, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:10 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate-test[86655]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec  3 16:09:10 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate-test[86655]:                            [--no-systemd] [--no-tmpfs]
Dec  3 16:09:10 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate-test[86655]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  3 16:09:10 np0005544708 systemd[1]: libpod-82b748883acf228d74b734608c59a679b018f0ba374661caffeadbd90320a9b6.scope: Deactivated successfully.
Dec  3 16:09:10 np0005544708 podman[86639]: 2025-12-03 21:09:10.823841608 +0000 UTC m=+0.344964850 container died 82b748883acf228d74b734608c59a679b018f0ba374661caffeadbd90320a9b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate-test, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:10 np0005544708 systemd[1]: var-lib-containers-storage-overlay-0813c3f1db70882f987d58f5e84731b234c624483c74d6d6aa2deccae866249d-merged.mount: Deactivated successfully.
Dec  3 16:09:10 np0005544708 podman[86639]: 2025-12-03 21:09:10.863177152 +0000 UTC m=+0.384300384 container remove 82b748883acf228d74b734608c59a679b018f0ba374661caffeadbd90320a9b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate-test, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:09:10 np0005544708 systemd[1]: libpod-conmon-82b748883acf228d74b734608c59a679b018f0ba374661caffeadbd90320a9b6.scope: Deactivated successfully.
Dec  3 16:09:10 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  3 16:09:10 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  3 16:09:11 np0005544708 systemd[1]: Reloading.
Dec  3 16:09:11 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 16:09:11 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:09:11 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:09:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec  3 16:09:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 16:09:11 np0005544708 ceph-mon[75204]: from='osd.0 [v2:192.168.122.100:6802/1181083466,v1:192.168.122.100:6803/1181083466]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  3 16:09:11 np0005544708 ceph-mon[75204]: from='osd.0 [v2:192.168.122.100:6802/1181083466,v1:192.168.122.100:6803/1181083466]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec  3 16:09:11 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1181083466,v1:192.168.122.100:6803/1181083466]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  3 16:09:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Dec  3 16:09:11 np0005544708 ceph-osd[86059]: osd.0 0 done with init, starting boot process
Dec  3 16:09:11 np0005544708 ceph-osd[86059]: osd.0 0 start_boot
Dec  3 16:09:11 np0005544708 ceph-osd[86059]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  3 16:09:11 np0005544708 ceph-osd[86059]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  3 16:09:11 np0005544708 ceph-osd[86059]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  3 16:09:11 np0005544708 ceph-osd[86059]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  3 16:09:11 np0005544708 ceph-osd[86059]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec  3 16:09:11 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Dec  3 16:09:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  3 16:09:11 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  3 16:09:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  3 16:09:11 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  3 16:09:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  3 16:09:11 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  3 16:09:11 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 16:09:11 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 16:09:11 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 16:09:11 np0005544708 ceph-mgr[75500]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1181083466; not ready for session (expect reconnect)
Dec  3 16:09:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  3 16:09:11 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  3 16:09:11 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 16:09:11 np0005544708 systemd[1]: Reloading.
Dec  3 16:09:11 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:09:11 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:09:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:09:11 np0005544708 systemd[1]: Starting Ceph osd.1 for c21de27e-a7fd-594b-8324-0697ba9aab3a...
Dec  3 16:09:11 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:09:12 np0005544708 podman[86815]: 2025-12-03 21:09:12.102009119 +0000 UTC m=+0.073021993 container create 390b5ae1b119b449f318cfa243368a97044a30c5680633299749cead7fe8fd98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 16:09:12 np0005544708 podman[86815]: 2025-12-03 21:09:12.059913829 +0000 UTC m=+0.030926763 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:12 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b20058cc80da8a68b7b079bee61797b1fd9f08e659127efaf366613702c7213/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b20058cc80da8a68b7b079bee61797b1fd9f08e659127efaf366613702c7213/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b20058cc80da8a68b7b079bee61797b1fd9f08e659127efaf366613702c7213/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b20058cc80da8a68b7b079bee61797b1fd9f08e659127efaf366613702c7213/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b20058cc80da8a68b7b079bee61797b1fd9f08e659127efaf366613702c7213/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:12 np0005544708 podman[86815]: 2025-12-03 21:09:12.208425164 +0000 UTC m=+0.179438068 container init 390b5ae1b119b449f318cfa243368a97044a30c5680633299749cead7fe8fd98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  3 16:09:12 np0005544708 podman[86815]: 2025-12-03 21:09:12.220172685 +0000 UTC m=+0.191185609 container start 390b5ae1b119b449f318cfa243368a97044a30c5680633299749cead7fe8fd98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 16:09:12 np0005544708 podman[86815]: 2025-12-03 21:09:12.231872104 +0000 UTC m=+0.202885488 container attach 390b5ae1b119b449f318cfa243368a97044a30c5680633299749cead7fe8fd98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:12 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate[86830]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:12 np0005544708 bash[86815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:12 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate[86830]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:12 np0005544708 bash[86815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:12 np0005544708 ceph-mgr[75500]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1181083466; not ready for session (expect reconnect)
Dec  3 16:09:12 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  3 16:09:12 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  3 16:09:12 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 16:09:12 np0005544708 ceph-mon[75204]: from='osd.0 [v2:192.168.122.100:6802/1181083466,v1:192.168.122.100:6803/1181083466]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  3 16:09:13 np0005544708 lvm[86915]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:09:13 np0005544708 lvm[86915]: VG ceph_vg0 finished
Dec  3 16:09:13 np0005544708 lvm[86916]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:09:13 np0005544708 lvm[86916]: VG ceph_vg1 finished
Dec  3 16:09:13 np0005544708 lvm[86918]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:09:13 np0005544708 lvm[86918]: VG ceph_vg2 finished
Dec  3 16:09:13 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate[86830]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  3 16:09:13 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate[86830]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:13 np0005544708 bash[86815]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  3 16:09:13 np0005544708 bash[86815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:13 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate[86830]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:13 np0005544708 bash[86815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:13 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 16:09:13 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate[86830]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  3 16:09:13 np0005544708 bash[86815]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  3 16:09:13 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate[86830]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  3 16:09:13 np0005544708 bash[86815]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  3 16:09:13 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate[86830]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:13 np0005544708 bash[86815]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:13 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate[86830]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:13 np0005544708 bash[86815]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:13 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate[86830]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  3 16:09:13 np0005544708 bash[86815]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  3 16:09:13 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate[86830]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  3 16:09:13 np0005544708 bash[86815]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  3 16:09:13 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate[86830]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  3 16:09:13 np0005544708 bash[86815]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  3 16:09:13 np0005544708 systemd[1]: libpod-390b5ae1b119b449f318cfa243368a97044a30c5680633299749cead7fe8fd98.scope: Deactivated successfully.
Dec  3 16:09:13 np0005544708 systemd[1]: libpod-390b5ae1b119b449f318cfa243368a97044a30c5680633299749cead7fe8fd98.scope: Consumed 1.686s CPU time.
Dec  3 16:09:13 np0005544708 podman[86815]: 2025-12-03 21:09:13.408271445 +0000 UTC m=+1.379284359 container died 390b5ae1b119b449f318cfa243368a97044a30c5680633299749cead7fe8fd98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:09:13 np0005544708 ceph-mgr[75500]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1181083466; not ready for session (expect reconnect)
Dec  3 16:09:13 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  3 16:09:13 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  3 16:09:13 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 16:09:13 np0005544708 systemd[1]: var-lib-containers-storage-overlay-5b20058cc80da8a68b7b079bee61797b1fd9f08e659127efaf366613702c7213-merged.mount: Deactivated successfully.
Dec  3 16:09:13 np0005544708 podman[86815]: 2025-12-03 21:09:13.517689981 +0000 UTC m=+1.488702875 container remove 390b5ae1b119b449f318cfa243368a97044a30c5680633299749cead7fe8fd98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec  3 16:09:13 np0005544708 ceph-mgr[75500]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec  3 16:09:13 np0005544708 podman[87075]: 2025-12-03 21:09:13.806032614 +0000 UTC m=+0.062242163 container create 947e483d8391b48b3468ce765508e3878efe6194e4c13a9b9406fbc1894cb209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:13 np0005544708 podman[87075]: 2025-12-03 21:09:13.771027228 +0000 UTC m=+0.027236787 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:14 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ff7bf7fd4b63083be2563f84919eef2a779ac57fa3c5dd2cd4a0d37e55d871/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:14 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ff7bf7fd4b63083be2563f84919eef2a779ac57fa3c5dd2cd4a0d37e55d871/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:14 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ff7bf7fd4b63083be2563f84919eef2a779ac57fa3c5dd2cd4a0d37e55d871/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:14 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ff7bf7fd4b63083be2563f84919eef2a779ac57fa3c5dd2cd4a0d37e55d871/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:14 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ff7bf7fd4b63083be2563f84919eef2a779ac57fa3c5dd2cd4a0d37e55d871/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:14 np0005544708 podman[87075]: 2025-12-03 21:09:14.03978691 +0000 UTC m=+0.295996439 container init 947e483d8391b48b3468ce765508e3878efe6194e4c13a9b9406fbc1894cb209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:14 np0005544708 podman[87075]: 2025-12-03 21:09:14.048652601 +0000 UTC m=+0.304862110 container start 947e483d8391b48b3468ce765508e3878efe6194e4c13a9b9406fbc1894cb209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 16:09:14 np0005544708 bash[87075]: 947e483d8391b48b3468ce765508e3878efe6194e4c13a9b9406fbc1894cb209
Dec  3 16:09:14 np0005544708 systemd[1]: Started Ceph osd.1 for c21de27e-a7fd-594b-8324-0697ba9aab3a.
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: pidfile_write: ignore empty --pid-file
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 16:09:14 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 16:09:14 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:14 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 16:09:14 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:14 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec  3 16:09:14 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec  3 16:09:14 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:09:14 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:09:14 np0005544708 ceph-mgr[75500]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Dec  3 16:09:14 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040400 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c040000 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: load: jerasure load: lrc 
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 16:09:14 np0005544708 ceph-osd[86059]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 36.196 iops: 9266.164 elapsed_sec: 0.324
Dec  3 16:09:14 np0005544708 ceph-osd[86059]: log_channel(cluster) log [WRN] : OSD bench result of 9266.164411 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  3 16:09:14 np0005544708 ceph-osd[86059]: osd.0 0 waiting for initial osdmap
Dec  3 16:09:14 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0[86055]: 2025-12-03T21:09:14.417+0000 7fb85d9a7640 -1 osd.0 0 waiting for initial osdmap
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:14 np0005544708 ceph-osd[86059]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec  3 16:09:14 np0005544708 ceph-osd[86059]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec  3 16:09:14 np0005544708 ceph-osd[86059]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec  3 16:09:14 np0005544708 ceph-osd[86059]: osd.0 8 check_osdmap_features require_osd_release unknown -> tentacle
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1c041c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1ccd7800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1ccd7800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1ccd7800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1ccd7800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs mount
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs mount shared_bdev_used = 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: RocksDB version: 7.9.2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Git sha 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Compile date 2025-10-30 15:42:43
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: DB SUMMARY
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: DB Session ID:  D9EAUIZ0QV3Y04LRFPJ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: CURRENT file:  CURRENT
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                         Options.error_if_exists: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.create_if_missing: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                                     Options.env: 0x55cf1bed1ea0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                                Options.info_log: 0x55cf1cf348a0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                              Options.statistics: (nil)
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.use_fsync: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                              Options.db_log_dir: 
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                                 Options.wal_dir: db.wal
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.write_buffer_manager: 0x55cf1bf32b40
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.unordered_write: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.row_cache: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                              Options.wal_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.two_write_queues: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.wal_compression: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.atomic_flush: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.max_background_jobs: 4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.max_background_compactions: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.max_subcompactions: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.max_open_files: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Compression algorithms supported:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: #011kZSTD supported: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: #011kXpressCompression supported: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: #011kBZip2Compression supported: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: #011kLZ4Compression supported: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: #011kZlibCompression supported: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: #011kSnappyCompression supported: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf34c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cf1bed58d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf34c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cf1bed58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf34c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cf1bed58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf34c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cf1bed58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[86059]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  3 16:09:14 np0005544708 ceph-osd[86059]: osd.0 8 set_numa_affinity not setting numa affinity
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf34c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cf1bed58d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-0[86055]: 2025-12-03T21:09:14.441+0000 7fb8587ac640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[86059]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf34c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cf1bed58d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf34c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cf1bed58d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf34c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cf1bed5a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf34c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cf1bed5a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf34c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cf1bed5a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 755311dd-465d-446c-bb3d-52d79ad19b23
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796154448042, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796154450352, "job": 1, "event": "recovery_finished"}
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: freelist init
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: freelist _read_cfg
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs umount
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1ccd7800 /var/lib/ceph/osd/ceph-1/block) close
Dec  3 16:09:14 np0005544708 ceph-mgr[75500]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1181083466; not ready for session (expect reconnect)
Dec  3 16:09:14 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  3 16:09:14 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  3 16:09:14 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1ccd7800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1ccd7800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1ccd7800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bdev(0x55cf1ccd7800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs mount
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluefs mount shared_bdev_used = 27262976
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: RocksDB version: 7.9.2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Git sha 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Compile date 2025-10-30 15:42:43
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: DB SUMMARY
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: DB Session ID:  D9EAUIZ0QV3Y04LRFPJ5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: CURRENT file:  CURRENT
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                         Options.error_if_exists: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.create_if_missing: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                                     Options.env: 0x55cf1d104a80
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                                Options.info_log: 0x55cf1cf34960
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                              Options.statistics: (nil)
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.use_fsync: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                              Options.db_log_dir: 
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                                 Options.wal_dir: db.wal
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.write_buffer_manager: 0x55cf1bf33900
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.unordered_write: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.row_cache: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                              Options.wal_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.two_write_queues: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.wal_compression: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.atomic_flush: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.max_background_jobs: 4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.max_background_compactions: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.max_subcompactions: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.max_open_files: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Compression algorithms supported:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: #011kZSTD supported: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: #011kXpressCompression supported: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: #011kBZip2Compression supported: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: #011kLZ4Compression supported: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: #011kZlibCompression supported: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: #011kSnappyCompression supported: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf34bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cf1bed58d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf34bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cf1bed58d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf34bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cf1bed58d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf34bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cf1bed58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf34bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cf1bed58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf34bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cf1bed58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf34bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cf1bed58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf350c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cf1bed5a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf350c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55cf1bed5a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55cf1cf350c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55cf1bed5a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 755311dd-465d-446c-bb3d-52d79ad19b23
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796154492309, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796154499591, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796154, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "755311dd-465d-446c-bb3d-52d79ad19b23", "db_session_id": "D9EAUIZ0QV3Y04LRFPJ5", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796154503045, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796154, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "755311dd-465d-446c-bb3d-52d79ad19b23", "db_session_id": "D9EAUIZ0QV3Y04LRFPJ5", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796154506144, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796154, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "755311dd-465d-446c-bb3d-52d79ad19b23", "db_session_id": "D9EAUIZ0QV3Y04LRFPJ5", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796154507663, "job": 1, "event": "recovery_finished"}
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  3 16:09:14 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:14 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:14 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec  3 16:09:14 np0005544708 ceph-mon[75204]: Deploying daemon osd.2 on compute-0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55cf1d13c000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: DB pointer 0x55cf1d0ee000
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cf1bed58d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cf1bed58d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cf1bed58d0#2 capacity: 460.80 MB usag
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: _get_class not permitted to load lua
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: _get_class not permitted to load sdk
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: osd.1 0 load_pgs
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: osd.1 0 load_pgs opened 0 pgs
Dec  3 16:09:14 np0005544708 ceph-osd[87094]: osd.1 0 log_to_monitors true
Dec  3 16:09:14 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1[87090]: 2025-12-03T21:09:14.533+0000 7f53eff388c0 -1 osd.1 0 log_to_monitors true
Dec  3 16:09:14 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec  3 16:09:14 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/96508272,v1:192.168.122.100:6807/96508272]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Dec  3 16:09:14 np0005544708 podman[87634]: 2025-12-03 21:09:14.708736381 +0000 UTC m=+0.035275942 container create b70b9acd0daa5f4217fa49e4e88ac2ba81aa7e49f0ed5bf4cae9195d1f025a5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lederberg, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec  3 16:09:14 np0005544708 systemd[1]: Started libpod-conmon-b70b9acd0daa5f4217fa49e4e88ac2ba81aa7e49f0ed5bf4cae9195d1f025a5d.scope.
Dec  3 16:09:14 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:14 np0005544708 podman[87634]: 2025-12-03 21:09:14.692841216 +0000 UTC m=+0.019380797 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:14 np0005544708 podman[87634]: 2025-12-03 21:09:14.79822942 +0000 UTC m=+0.124769071 container init b70b9acd0daa5f4217fa49e4e88ac2ba81aa7e49f0ed5bf4cae9195d1f025a5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:14 np0005544708 podman[87634]: 2025-12-03 21:09:14.81047807 +0000 UTC m=+0.137017671 container start b70b9acd0daa5f4217fa49e4e88ac2ba81aa7e49f0ed5bf4cae9195d1f025a5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lederberg, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:14 np0005544708 podman[87634]: 2025-12-03 21:09:14.815954702 +0000 UTC m=+0.142494353 container attach b70b9acd0daa5f4217fa49e4e88ac2ba81aa7e49f0ed5bf4cae9195d1f025a5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lederberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 16:09:14 np0005544708 funny_lederberg[87651]: 167 167
Dec  3 16:09:14 np0005544708 systemd[1]: libpod-b70b9acd0daa5f4217fa49e4e88ac2ba81aa7e49f0ed5bf4cae9195d1f025a5d.scope: Deactivated successfully.
Dec  3 16:09:14 np0005544708 podman[87634]: 2025-12-03 21:09:14.819508255 +0000 UTC m=+0.146047826 container died b70b9acd0daa5f4217fa49e4e88ac2ba81aa7e49f0ed5bf4cae9195d1f025a5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:14 np0005544708 systemd[1]: var-lib-containers-storage-overlay-42154d0bedce981c3e63acb395e76a58bba22a44b1870efb681fea14c7e84f8a-merged.mount: Deactivated successfully.
Dec  3 16:09:14 np0005544708 podman[87634]: 2025-12-03 21:09:14.869761961 +0000 UTC m=+0.196301532 container remove b70b9acd0daa5f4217fa49e4e88ac2ba81aa7e49f0ed5bf4cae9195d1f025a5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:14 np0005544708 systemd[1]: libpod-conmon-b70b9acd0daa5f4217fa49e4e88ac2ba81aa7e49f0ed5bf4cae9195d1f025a5d.scope: Deactivated successfully.
Dec  3 16:09:15 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/96508272,v1:192.168.122.100:6807/96508272]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1181083466,v1:192.168.122.100:6803/1181083466] boot
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec  3 16:09:15 np0005544708 ceph-osd[86059]: osd.0 9 state: booting -> active
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/96508272,v1:192.168.122.100:6807/96508272]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  3 16:09:15 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 16:09:15 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 16:09:15 np0005544708 podman[87679]: 2025-12-03 21:09:15.221228335 +0000 UTC m=+0.075946923 container create 182336f97dc97be09dd763af6aaa47ea45eada61fce719fec02581e14c31ec75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Dec  3 16:09:15 np0005544708 systemd[1]: Started libpod-conmon-182336f97dc97be09dd763af6aaa47ea45eada61fce719fec02581e14c31ec75.scope.
Dec  3 16:09:15 np0005544708 podman[87679]: 2025-12-03 21:09:15.192766213 +0000 UTC m=+0.047484841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:15 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:15 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56fe9eb4254c2426c9fa3c340e9102dbfed1471cfc4c0356005b5d0ec45829b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:15 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56fe9eb4254c2426c9fa3c340e9102dbfed1471cfc4c0356005b5d0ec45829b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:15 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56fe9eb4254c2426c9fa3c340e9102dbfed1471cfc4c0356005b5d0ec45829b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:15 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56fe9eb4254c2426c9fa3c340e9102dbfed1471cfc4c0356005b5d0ec45829b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:15 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56fe9eb4254c2426c9fa3c340e9102dbfed1471cfc4c0356005b5d0ec45829b9/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:15 np0005544708 podman[87679]: 2025-12-03 21:09:15.321207918 +0000 UTC m=+0.175926556 container init 182336f97dc97be09dd763af6aaa47ea45eada61fce719fec02581e14c31ec75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:15 np0005544708 podman[87679]: 2025-12-03 21:09:15.331911656 +0000 UTC m=+0.186630214 container start 182336f97dc97be09dd763af6aaa47ea45eada61fce719fec02581e14c31ec75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate-test, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec  3 16:09:15 np0005544708 podman[87679]: 2025-12-03 21:09:15.336033061 +0000 UTC m=+0.190751659 container attach 182336f97dc97be09dd763af6aaa47ea45eada61fce719fec02581e14c31ec75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate-test, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  3 16:09:15 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  3 16:09:15 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  3 16:09:15 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate-test[87695]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec  3 16:09:15 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate-test[87695]:                            [--no-systemd] [--no-tmpfs]
Dec  3 16:09:15 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate-test[87695]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  3 16:09:15 np0005544708 systemd[1]: libpod-182336f97dc97be09dd763af6aaa47ea45eada61fce719fec02581e14c31ec75.scope: Deactivated successfully.
Dec  3 16:09:15 np0005544708 conmon[87695]: conmon 182336f97dc97be09dd7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-182336f97dc97be09dd763af6aaa47ea45eada61fce719fec02581e14c31ec75.scope/container/memory.events
Dec  3 16:09:15 np0005544708 podman[87679]: 2025-12-03 21:09:15.528595906 +0000 UTC m=+0.383314464 container died 182336f97dc97be09dd763af6aaa47ea45eada61fce719fec02581e14c31ec75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: OSD bench result of 9266.164411 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: from='osd.1 [v2:192.168.122.100:6806/96508272,v1:192.168.122.100:6807/96508272]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: from='osd.1 [v2:192.168.122.100:6806/96508272,v1:192.168.122.100:6807/96508272]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: osd.0 [v2:192.168.122.100:6802/1181083466,v1:192.168.122.100:6803/1181083466] boot
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: from='osd.1 [v2:192.168.122.100:6806/96508272,v1:192.168.122.100:6807/96508272]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec  3 16:09:15 np0005544708 systemd[1]: var-lib-containers-storage-overlay-56fe9eb4254c2426c9fa3c340e9102dbfed1471cfc4c0356005b5d0ec45829b9-merged.mount: Deactivated successfully.
Dec  3 16:09:15 np0005544708 podman[87679]: 2025-12-03 21:09:15.580980966 +0000 UTC m=+0.435699514 container remove 182336f97dc97be09dd763af6aaa47ea45eada61fce719fec02581e14c31ec75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 16:09:15 np0005544708 systemd[1]: libpod-conmon-182336f97dc97be09dd763af6aaa47ea45eada61fce719fec02581e14c31ec75.scope: Deactivated successfully.
Dec  3 16:09:15 np0005544708 ceph-mgr[75500]: [devicehealth INFO root] creating mgr pool
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec  3 16:09:15 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Dec  3 16:09:15 np0005544708 systemd[1]: Reloading.
Dec  3 16:09:16 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:09:16 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/96508272,v1:192.168.122.100:6807/96508272]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Dec  3 16:09:16 np0005544708 ceph-osd[87094]: osd.1 0 done with init, starting boot process
Dec  3 16:09:16 np0005544708 ceph-osd[87094]: osd.1 0 start_boot
Dec  3 16:09:16 np0005544708 ceph-osd[87094]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  3 16:09:16 np0005544708 ceph-osd[87094]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  3 16:09:16 np0005544708 ceph-osd[87094]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  3 16:09:16 np0005544708 ceph-osd[87094]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  3 16:09:16 np0005544708 ceph-osd[87094]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Dec  3 16:09:16 np0005544708 ceph-osd[86059]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  3 16:09:16 np0005544708 ceph-osd[86059]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec  3 16:09:16 np0005544708 ceph-osd[86059]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Dec  3 16:09:16 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 16:09:16 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 16:09:16 np0005544708 ceph-mgr[75500]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/96508272; not ready for session (expect reconnect)
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  3 16:09:16 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 16:09:16 np0005544708 systemd[1]: Reloading.
Dec  3 16:09:16 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:09:16 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:09:16 np0005544708 systemd[1]: Starting Ceph osd.2 for c21de27e-a7fd-594b-8324-0697ba9aab3a...
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: from='osd.1 [v2:192.168.122.100:6806/96508272,v1:192.168.122.100:6807/96508272]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Dec  3 16:09:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:09:16 np0005544708 podman[87854]: 2025-12-03 21:09:16.824904697 +0000 UTC m=+0.066711033 container create fae8765a8ea84675703de8a96589308baab2490b03e534d120a72d0aeb771434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:16 np0005544708 podman[87854]: 2025-12-03 21:09:16.791142417 +0000 UTC m=+0.032948803 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:16 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:16 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841fa57832501ab18683aa900518b0bea3e0c517c3ba104b10f9784ef74c8840/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:16 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841fa57832501ab18683aa900518b0bea3e0c517c3ba104b10f9784ef74c8840/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:16 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841fa57832501ab18683aa900518b0bea3e0c517c3ba104b10f9784ef74c8840/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:16 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841fa57832501ab18683aa900518b0bea3e0c517c3ba104b10f9784ef74c8840/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:16 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841fa57832501ab18683aa900518b0bea3e0c517c3ba104b10f9784ef74c8840/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:16 np0005544708 podman[87854]: 2025-12-03 21:09:16.944510002 +0000 UTC m=+0.186316328 container init fae8765a8ea84675703de8a96589308baab2490b03e534d120a72d0aeb771434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  3 16:09:16 np0005544708 podman[87854]: 2025-12-03 21:09:16.952679199 +0000 UTC m=+0.194485495 container start fae8765a8ea84675703de8a96589308baab2490b03e534d120a72d0aeb771434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec  3 16:09:16 np0005544708 podman[87854]: 2025-12-03 21:09:16.966394209 +0000 UTC m=+0.208200515 container attach fae8765a8ea84675703de8a96589308baab2490b03e534d120a72d0aeb771434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:17 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v29: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  3 16:09:17 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate[87869]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:17 np0005544708 bash[87854]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec  3 16:09:17 np0005544708 ceph-mgr[75500]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/96508272; not ready for session (expect reconnect)
Dec  3 16:09:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  3 16:09:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  3 16:09:17 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 16:09:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  3 16:09:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Dec  3 16:09:17 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Dec  3 16:09:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  3 16:09:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  3 16:09:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  3 16:09:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  3 16:09:17 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 16:09:17 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 16:09:17 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate[87869]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:17 np0005544708 bash[87854]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:17 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  3 16:09:17 np0005544708 lvm[87955]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:09:17 np0005544708 lvm[87955]: VG ceph_vg1 finished
Dec  3 16:09:17 np0005544708 lvm[87954]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:09:17 np0005544708 lvm[87954]: VG ceph_vg0 finished
Dec  3 16:09:17 np0005544708 lvm[87957]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:09:17 np0005544708 lvm[87957]: VG ceph_vg2 finished
Dec  3 16:09:17 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate[87869]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  3 16:09:17 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate[87869]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:17 np0005544708 bash[87854]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  3 16:09:17 np0005544708 bash[87854]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:17 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate[87869]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:17 np0005544708 bash[87854]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  3 16:09:17 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate[87869]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  3 16:09:17 np0005544708 bash[87854]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  3 16:09:17 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate[87869]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec  3 16:09:17 np0005544708 bash[87854]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec  3 16:09:18 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate[87869]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:18 np0005544708 bash[87854]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:18 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate[87869]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:18 np0005544708 bash[87854]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:18 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate[87869]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  3 16:09:18 np0005544708 bash[87854]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  3 16:09:18 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate[87869]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  3 16:09:18 np0005544708 bash[87854]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  3 16:09:18 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate[87869]: --> ceph-volume lvm activate successful for osd ID: 2
Dec  3 16:09:18 np0005544708 bash[87854]: --> ceph-volume lvm activate successful for osd ID: 2
Dec  3 16:09:18 np0005544708 systemd[1]: libpod-fae8765a8ea84675703de8a96589308baab2490b03e534d120a72d0aeb771434.scope: Deactivated successfully.
Dec  3 16:09:18 np0005544708 systemd[1]: libpod-fae8765a8ea84675703de8a96589308baab2490b03e534d120a72d0aeb771434.scope: Consumed 1.633s CPU time.
Dec  3 16:09:18 np0005544708 podman[88054]: 2025-12-03 21:09:18.162055904 +0000 UTC m=+0.033344522 container died fae8765a8ea84675703de8a96589308baab2490b03e534d120a72d0aeb771434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec  3 16:09:18 np0005544708 ceph-mgr[75500]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/96508272; not ready for session (expect reconnect)
Dec  3 16:09:18 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  3 16:09:18 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  3 16:09:18 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 16:09:18 np0005544708 systemd[1]: var-lib-containers-storage-overlay-841fa57832501ab18683aa900518b0bea3e0c517c3ba104b10f9784ef74c8840-merged.mount: Deactivated successfully.
Dec  3 16:09:18 np0005544708 podman[88054]: 2025-12-03 21:09:18.275201126 +0000 UTC m=+0.146489694 container remove fae8765a8ea84675703de8a96589308baab2490b03e534d120a72d0aeb771434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2-activate, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Dec  3 16:09:18 np0005544708 podman[88110]: 2025-12-03 21:09:18.521088711 +0000 UTC m=+0.059622929 container create f54ced40cf6e3cfd17117967711c6e8c3d0af1904d4dd52bc53b3908241174db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:18 np0005544708 podman[88110]: 2025-12-03 21:09:18.488358542 +0000 UTC m=+0.026892850 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:18 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66876297b08a692412578d395f689e9d9d959b66002228d001a170918bf222a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:18 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66876297b08a692412578d395f689e9d9d959b66002228d001a170918bf222a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:18 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66876297b08a692412578d395f689e9d9d959b66002228d001a170918bf222a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:18 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66876297b08a692412578d395f689e9d9d959b66002228d001a170918bf222a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:18 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66876297b08a692412578d395f689e9d9d959b66002228d001a170918bf222a/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:18 np0005544708 podman[88110]: 2025-12-03 21:09:18.641003312 +0000 UTC m=+0.179537530 container init f54ced40cf6e3cfd17117967711c6e8c3d0af1904d4dd52bc53b3908241174db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:18 np0005544708 podman[88110]: 2025-12-03 21:09:18.648207589 +0000 UTC m=+0.186741797 container start f54ced40cf6e3cfd17117967711c6e8c3d0af1904d4dd52bc53b3908241174db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec  3 16:09:18 np0005544708 bash[88110]: f54ced40cf6e3cfd17117967711c6e8c3d0af1904d4dd52bc53b3908241174db
Dec  3 16:09:18 np0005544708 systemd[1]: Started Ceph osd.2 for c21de27e-a7fd-594b-8324-0697ba9aab3a.
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: pidfile_write: ignore empty --pid-file
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 16:09:18 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 16:09:18 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:18 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 16:09:18 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624400 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07624000 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: load: jerasure load: lrc 
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:18 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bdev(0x559f07625c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bdev(0x559f082bb800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bdev(0x559f082bb800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bdev(0x559f082bb800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bdev(0x559f082bb800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs mount
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs mount shared_bdev_used = 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: RocksDB version: 7.9.2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Git sha 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Compile date 2025-10-30 15:42:43
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: DB SUMMARY
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: DB Session ID:  95T2HRKJRJBHLQ1U4F3O
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: CURRENT file:  CURRENT
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                         Options.error_if_exists: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.create_if_missing: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                                     Options.env: 0x559f074b5ea0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                                Options.info_log: 0x559f085068a0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                              Options.statistics: (nil)
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.use_fsync: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                              Options.db_log_dir: 
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                                 Options.wal_dir: db.wal
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.write_buffer_manager: 0x559f0751ab40
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.unordered_write: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.row_cache: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                              Options.wal_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.two_write_queues: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.wal_compression: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.atomic_flush: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.max_background_jobs: 4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.max_background_compactions: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.max_subcompactions: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.max_open_files: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Compression algorithms supported:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: 	kZSTD supported: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: 	kXpressCompression supported: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: 	kBZip2Compression supported: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: 	kLZ4Compression supported: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: 	kZlibCompression supported: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: 	kSnappyCompression supported: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f08506c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559f074b98d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f08506c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559f074b98d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f08506c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f074b98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f08506c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f074b98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f08506c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f074b98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f08506c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f074b98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f08506c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f074b98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f08506c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559f074b9a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f08506c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559f074b9a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f08506c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559f074b9a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5d37ca13-9b22-4d6f-b7f5-d136582f32d0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796159033233, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796159034800, "job": 1, "event": "recovery_finished"}
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: freelist init
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: freelist _read_cfg
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs umount
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bdev(0x559f082bb800 /var/lib/ceph/osd/ceph-2/block) close
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bdev(0x559f082bb800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bdev(0x559f082bb800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bdev(0x559f082bb800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bdev(0x559f082bb800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs mount
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluefs mount shared_bdev_used = 27262976
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: RocksDB version: 7.9.2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Git sha 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Compile date 2025-10-30 15:42:43
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: DB SUMMARY
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: DB Session ID:  95T2HRKJRJBHLQ1U4F3P
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: CURRENT file:  CURRENT
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: IDENTITY file:  IDENTITY
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                         Options.error_if_exists: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.create_if_missing: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                         Options.paranoid_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                                     Options.env: 0x559f086d6a80
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                                Options.info_log: 0x559f0853b7c0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_file_opening_threads: 16
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                              Options.statistics: (nil)
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.use_fsync: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.max_log_file_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                         Options.allow_fallocate: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.use_direct_reads: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.create_missing_column_families: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                              Options.db_log_dir: 
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                                 Options.wal_dir: db.wal
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.advise_random_on_open: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.write_buffer_manager: 0x559f0751b900
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                            Options.rate_limiter: (nil)
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.unordered_write: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.row_cache: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                              Options.wal_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.allow_ingest_behind: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.two_write_queues: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.manual_wal_flush: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.wal_compression: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.atomic_flush: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.log_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.allow_data_in_errors: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.db_host_id: __hostname__
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.max_background_jobs: 4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.max_background_compactions: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.max_subcompactions: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.max_open_files: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.bytes_per_sync: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.max_background_flushes: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Compression algorithms supported:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: #011kZSTD supported: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: #011kXpressCompression supported: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: #011kBZip2Compression supported: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: #011kLZ4Compression supported: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: #011kZlibCompression supported: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: #011kSnappyCompression supported: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f08506bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f074b98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f08506bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f074b98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f08506bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f074b98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f08506bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f074b98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f08506bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f074b98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f08506bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f074b98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f08506bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f074b98d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f085070c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f074b9a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f085070c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f074b9a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:           Options.merge_operator: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.compaction_filter_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.sst_partitioner_factory: None
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559f085070c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x559f074b9a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.write_buffer_size: 16777216
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.max_write_buffer_number: 64
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.compression: LZ4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.num_levels: 7
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.level: 32767
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.compression_opts.strategy: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                  Options.compression_opts.enabled: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.arena_block_size: 1048576
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.disable_auto_compactions: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.inplace_update_support: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.bloom_locality: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                    Options.max_successive_merges: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.paranoid_file_checks: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.force_consistency_checks: 1
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.report_bg_io_stats: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                               Options.ttl: 2592000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                       Options.enable_blob_files: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                           Options.min_blob_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                          Options.blob_file_size: 268435456
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb:                Options.blob_file_starting_level: 0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5d37ca13-9b22-4d6f-b7f5-d136582f32d0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796159096395, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796159101160, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796159, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d37ca13-9b22-4d6f-b7f5-d136582f32d0", "db_session_id": "95T2HRKJRJBHLQ1U4F3P", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796159111443, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796159, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d37ca13-9b22-4d6f-b7f5-d136582f32d0", "db_session_id": "95T2HRKJRJBHLQ1U4F3P", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796159113999, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796159, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d37ca13-9b22-4d6f-b7f5-d136582f32d0", "db_session_id": "95T2HRKJRJBHLQ1U4F3P", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796159122382, "job": 1, "event": "recovery_finished"}
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  3 16:09:19 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v31: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559f08720000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: DB pointer 0x559f086c0000
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559f074b98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559f074b98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559f074b98d0#2 capacity: 460.80 MB usag
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: _get_class not permitted to load lua
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: _get_class not permitted to load sdk
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: osd.2 0 load_pgs
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: osd.2 0 load_pgs opened 0 pgs
Dec  3 16:09:19 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2[88125]: 2025-12-03T21:09:19.177+0000 7f53bfdc88c0 -1 osd.2 0 log_to_monitors true
Dec  3 16:09:19 np0005544708 ceph-osd[88129]: osd.2 0 log_to_monitors true
Dec  3 16:09:19 np0005544708 ceph-mgr[75500]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/96508272; not ready for session (expect reconnect)
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  3 16:09:19 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/563643625,v1:192.168.122.100:6811/563643625]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Dec  3 16:09:19 np0005544708 podman[88643]: 2025-12-03 21:09:19.287631166 +0000 UTC m=+0.048196026 container create 89d985d660b3b30b51c3fea31304ab2ebe184ff5ac88445d0b1d3ca68855b0b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bhabha, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:19 np0005544708 systemd[1]: Started libpod-conmon-89d985d660b3b30b51c3fea31304ab2ebe184ff5ac88445d0b1d3ca68855b0b3.scope.
Dec  3 16:09:19 np0005544708 podman[88643]: 2025-12-03 21:09:19.263648286 +0000 UTC m=+0.024213146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:19 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:19 np0005544708 podman[88643]: 2025-12-03 21:09:19.399344459 +0000 UTC m=+0.159909309 container init 89d985d660b3b30b51c3fea31304ab2ebe184ff5ac88445d0b1d3ca68855b0b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:09:19 np0005544708 ceph-osd[87094]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 28.185 iops: 7215.454 elapsed_sec: 0.416
Dec  3 16:09:19 np0005544708 ceph-osd[87094]: log_channel(cluster) log [WRN] : OSD bench result of 7215.453955 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  3 16:09:19 np0005544708 ceph-osd[87094]: osd.1 0 waiting for initial osdmap
Dec  3 16:09:19 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1[87090]: 2025-12-03T21:09:19.399+0000 7f53ebeba640 -1 osd.1 0 waiting for initial osdmap
Dec  3 16:09:19 np0005544708 podman[88643]: 2025-12-03 21:09:19.407590037 +0000 UTC m=+0.168154877 container start 89d985d660b3b30b51c3fea31304ab2ebe184ff5ac88445d0b1d3ca68855b0b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bhabha, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec  3 16:09:19 np0005544708 systemd[1]: libpod-89d985d660b3b30b51c3fea31304ab2ebe184ff5ac88445d0b1d3ca68855b0b3.scope: Deactivated successfully.
Dec  3 16:09:19 np0005544708 dazzling_bhabha[88659]: 167 167
Dec  3 16:09:19 np0005544708 conmon[88659]: conmon 89d985d660b3b30b51c3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-89d985d660b3b30b51c3fea31304ab2ebe184ff5ac88445d0b1d3ca68855b0b3.scope/container/memory.events
Dec  3 16:09:19 np0005544708 ceph-osd[87094]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  3 16:09:19 np0005544708 ceph-osd[87094]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec  3 16:09:19 np0005544708 ceph-osd[87094]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  3 16:09:19 np0005544708 ceph-osd[87094]: osd.1 11 check_osdmap_features require_osd_release unknown -> tentacle
Dec  3 16:09:19 np0005544708 podman[88643]: 2025-12-03 21:09:19.416064041 +0000 UTC m=+0.176628881 container attach 89d985d660b3b30b51c3fea31304ab2ebe184ff5ac88445d0b1d3ca68855b0b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bhabha, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:19 np0005544708 podman[88643]: 2025-12-03 21:09:19.416423578 +0000 UTC m=+0.176988418 container died 89d985d660b3b30b51c3fea31304ab2ebe184ff5ac88445d0b1d3ca68855b0b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bhabha, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:09:19 np0005544708 ceph-osd[87094]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  3 16:09:19 np0005544708 ceph-osd[87094]: osd.1 11 set_numa_affinity not setting numa affinity
Dec  3 16:09:19 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-1[87090]: 2025-12-03T21:09:19.428+0000 7f53e6cbf640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  3 16:09:19 np0005544708 ceph-osd[87094]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Dec  3 16:09:19 np0005544708 systemd[1]: var-lib-containers-storage-overlay-076f3776f9582382c44a1592f150237b9f1089ff3d4ef162232615b62d092623-merged.mount: Deactivated successfully.
Dec  3 16:09:19 np0005544708 podman[88643]: 2025-12-03 21:09:19.471778639 +0000 UTC m=+0.232343479 container remove 89d985d660b3b30b51c3fea31304ab2ebe184ff5ac88445d0b1d3ca68855b0b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 16:09:19 np0005544708 systemd[1]: libpod-conmon-89d985d660b3b30b51c3fea31304ab2ebe184ff5ac88445d0b1d3ca68855b0b3.scope: Deactivated successfully.
Dec  3 16:09:19 np0005544708 podman[88683]: 2025-12-03 21:09:19.624403529 +0000 UTC m=+0.047582314 container create 64484685d43764b16eacfee894dc8b6359e3f105d33136f2b53233df5aaddd0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:19 np0005544708 systemd[1]: Started libpod-conmon-64484685d43764b16eacfee894dc8b6359e3f105d33136f2b53233df5aaddd0e.scope.
Dec  3 16:09:19 np0005544708 podman[88683]: 2025-12-03 21:09:19.603911589 +0000 UTC m=+0.027090394 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:19 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:19 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d6cc1050b3e1e425814b298a9067592341e4c8e1345bd8a1e3cc4dd42d882b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:19 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d6cc1050b3e1e425814b298a9067592341e4c8e1345bd8a1e3cc4dd42d882b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:19 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d6cc1050b3e1e425814b298a9067592341e4c8e1345bd8a1e3cc4dd42d882b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:19 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d6cc1050b3e1e425814b298a9067592341e4c8e1345bd8a1e3cc4dd42d882b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:19 np0005544708 podman[88683]: 2025-12-03 21:09:19.729054957 +0000 UTC m=+0.152233792 container init 64484685d43764b16eacfee894dc8b6359e3f105d33136f2b53233df5aaddd0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jones, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:09:19 np0005544708 podman[88683]: 2025-12-03 21:09:19.74482061 +0000 UTC m=+0.167999435 container start 64484685d43764b16eacfee894dc8b6359e3f105d33136f2b53233df5aaddd0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jones, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 16:09:19 np0005544708 podman[88683]: 2025-12-03 21:09:19.749325791 +0000 UTC m=+0.172504626 container attach 64484685d43764b16eacfee894dc8b6359e3f105d33136f2b53233df5aaddd0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jones, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: from='osd.2 [v2:192.168.122.100:6810/563643625,v1:192.168.122.100:6811/563643625]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: OSD bench result of 7215.453955 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/563643625,v1:192.168.122.100:6811/563643625]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/96508272,v1:192.168.122.100:6807/96508272] boot
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/563643625,v1:192.168.122.100:6811/563643625]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  3 16:09:19 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  3 16:09:19 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 16:09:19 np0005544708 ceph-osd[87094]: osd.1 12 state: booting -> active
Dec  3 16:09:19 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:09:20 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  3 16:09:20 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  3 16:09:20 np0005544708 lvm[88774]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:09:20 np0005544708 lvm[88774]: VG ceph_vg0 finished
Dec  3 16:09:20 np0005544708 lvm[88776]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:09:20 np0005544708 lvm[88776]: VG ceph_vg1 finished
Dec  3 16:09:20 np0005544708 lvm[88777]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:09:20 np0005544708 lvm[88777]: VG ceph_vg2 finished
Dec  3 16:09:20 np0005544708 stoic_jones[88699]: {}
Dec  3 16:09:20 np0005544708 systemd[1]: libpod-64484685d43764b16eacfee894dc8b6359e3f105d33136f2b53233df5aaddd0e.scope: Deactivated successfully.
Dec  3 16:09:20 np0005544708 systemd[1]: libpod-64484685d43764b16eacfee894dc8b6359e3f105d33136f2b53233df5aaddd0e.scope: Consumed 1.407s CPU time.
Dec  3 16:09:20 np0005544708 podman[88683]: 2025-12-03 21:09:20.649179732 +0000 UTC m=+1.072358527 container died 64484685d43764b16eacfee894dc8b6359e3f105d33136f2b53233df5aaddd0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:20 np0005544708 systemd[1]: var-lib-containers-storage-overlay-89d6cc1050b3e1e425814b298a9067592341e4c8e1345bd8a1e3cc4dd42d882b-merged.mount: Deactivated successfully.
Dec  3 16:09:20 np0005544708 podman[88683]: 2025-12-03 21:09:20.704267127 +0000 UTC m=+1.127445922 container remove 64484685d43764b16eacfee894dc8b6359e3f105d33136f2b53233df5aaddd0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec  3 16:09:20 np0005544708 systemd[1]: libpod-conmon-64484685d43764b16eacfee894dc8b6359e3f105d33136f2b53233df5aaddd0e.scope: Deactivated successfully.
Dec  3 16:09:20 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:09:20 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:20 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:09:20 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:20 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec  3 16:09:20 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/563643625,v1:192.168.122.100:6811/563643625]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  3 16:09:20 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Dec  3 16:09:20 np0005544708 ceph-osd[88129]: osd.2 0 done with init, starting boot process
Dec  3 16:09:20 np0005544708 ceph-osd[88129]: osd.2 0 start_boot
Dec  3 16:09:20 np0005544708 ceph-osd[88129]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  3 16:09:20 np0005544708 ceph-osd[88129]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  3 16:09:20 np0005544708 ceph-osd[88129]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  3 16:09:20 np0005544708 ceph-osd[88129]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  3 16:09:20 np0005544708 ceph-osd[88129]: osd.2 0  bench count 12288000 bsize 4 KiB
Dec  3 16:09:20 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Dec  3 16:09:20 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  3 16:09:20 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  3 16:09:20 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:09:20 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 16:09:20 np0005544708 ceph-mon[75204]: from='osd.2 [v2:192.168.122.100:6810/563643625,v1:192.168.122.100:6811/563643625]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  3 16:09:20 np0005544708 ceph-mon[75204]: osd.1 [v2:192.168.122.100:6806/96508272,v1:192.168.122.100:6807/96508272] boot
Dec  3 16:09:20 np0005544708 ceph-mon[75204]: from='osd.2 [v2:192.168.122.100:6810/563643625,v1:192.168.122.100:6811/563643625]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec  3 16:09:20 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:20 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:20 np0005544708 ceph-mgr[75500]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/563643625; not ready for session (expect reconnect)
Dec  3 16:09:20 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  3 16:09:20 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  3 16:09:20 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 16:09:20 np0005544708 ceph-mgr[75500]: [devicehealth INFO root] creating main.db for devicehealth
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: [devicehealth INFO root] Check health
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Dec  3 16:09:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v34: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  3 16:09:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  3 16:09:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  3 16:09:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:09:21
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Some PGs (1.000000) are inactive; try again later
Dec  3 16:09:21 np0005544708 podman[88925]: 2025-12-03 21:09:21.509292719 +0000 UTC m=+0.083584949 container exec 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Dec  3 16:09:21 np0005544708 podman[88925]: 2025-12-03 21:09:21.60089898 +0000 UTC m=+0.175191240 container exec_died 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 16:09:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:09:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec  3 16:09:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/563643625; not ready for session (expect reconnect)
Dec  3 16:09:21 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Dec  3 16:09:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  3 16:09:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  3 16:09:21 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 16:09:21 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.jxauqt(active, since 60s)
Dec  3 16:09:21 np0005544708 ceph-mon[75204]: from='osd.2 [v2:192.168.122.100:6810/563643625,v1:192.168.122.100:6811/563643625]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  3 16:09:21 np0005544708 ceph-mon[75204]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  3 16:09:21 np0005544708 ceph-mon[75204]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  3 16:09:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:09:22 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:09:22 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:22 np0005544708 ceph-mgr[75500]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/563643625; not ready for session (expect reconnect)
Dec  3 16:09:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  3 16:09:22 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  3 16:09:22 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 16:09:22 np0005544708 podman[89137]: 2025-12-03 21:09:22.859750267 +0000 UTC m=+0.058983156 container create eadc40c29fbd4bab5aaefb960ef56704f8904ab0760d1bdf019d43755fa9dc70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:09:22 np0005544708 systemd[1]: Started libpod-conmon-eadc40c29fbd4bab5aaefb960ef56704f8904ab0760d1bdf019d43755fa9dc70.scope.
Dec  3 16:09:22 np0005544708 podman[89137]: 2025-12-03 21:09:22.829922847 +0000 UTC m=+0.029155846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:22 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:22 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:22 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:22 np0005544708 podman[89137]: 2025-12-03 21:09:22.962384724 +0000 UTC m=+0.161617633 container init eadc40c29fbd4bab5aaefb960ef56704f8904ab0760d1bdf019d43755fa9dc70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kepler, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  3 16:09:22 np0005544708 podman[89137]: 2025-12-03 21:09:22.972373109 +0000 UTC m=+0.171605998 container start eadc40c29fbd4bab5aaefb960ef56704f8904ab0760d1bdf019d43755fa9dc70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kepler, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:22 np0005544708 objective_kepler[89153]: 167 167
Dec  3 16:09:22 np0005544708 systemd[1]: libpod-eadc40c29fbd4bab5aaefb960ef56704f8904ab0760d1bdf019d43755fa9dc70.scope: Deactivated successfully.
Dec  3 16:09:22 np0005544708 conmon[89153]: conmon eadc40c29fbd4bab5aae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eadc40c29fbd4bab5aaefb960ef56704f8904ab0760d1bdf019d43755fa9dc70.scope/container/memory.events
Dec  3 16:09:22 np0005544708 podman[89137]: 2025-12-03 21:09:22.993259335 +0000 UTC m=+0.192492244 container attach eadc40c29fbd4bab5aaefb960ef56704f8904ab0760d1bdf019d43755fa9dc70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:22 np0005544708 podman[89137]: 2025-12-03 21:09:22.993611312 +0000 UTC m=+0.192844191 container died eadc40c29fbd4bab5aaefb960ef56704f8904ab0760d1bdf019d43755fa9dc70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kepler, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:23 np0005544708 systemd[1]: var-lib-containers-storage-overlay-46c2880fd10ce639b18faef50696152f9bccbc756b9c4eb10b32f5541eccade6-merged.mount: Deactivated successfully.
Dec  3 16:09:23 np0005544708 podman[89137]: 2025-12-03 21:09:23.125633731 +0000 UTC m=+0.324866620 container remove eadc40c29fbd4bab5aaefb960ef56704f8904ab0760d1bdf019d43755fa9dc70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:23 np0005544708 systemd[1]: libpod-conmon-eadc40c29fbd4bab5aaefb960ef56704f8904ab0760d1bdf019d43755fa9dc70.scope: Deactivated successfully.
Dec  3 16:09:23 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v36: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  3 16:09:23 np0005544708 podman[89188]: 2025-12-03 21:09:23.330896855 +0000 UTC m=+0.079521166 container create 4e464f41b43a8694d885396947f179f332e72481d93b792055effec957f5f58e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec  3 16:09:23 np0005544708 systemd[1]: Started libpod-conmon-4e464f41b43a8694d885396947f179f332e72481d93b792055effec957f5f58e.scope.
Dec  3 16:09:23 np0005544708 podman[89188]: 2025-12-03 21:09:23.281086697 +0000 UTC m=+0.029711038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:23 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:23 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/720e15975c0d0b7ce12c9bc415c5afbf6282ccce5fbd87d4f61b425435e1c855/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:23 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/720e15975c0d0b7ce12c9bc415c5afbf6282ccce5fbd87d4f61b425435e1c855/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:23 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/720e15975c0d0b7ce12c9bc415c5afbf6282ccce5fbd87d4f61b425435e1c855/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:23 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/720e15975c0d0b7ce12c9bc415c5afbf6282ccce5fbd87d4f61b425435e1c855/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:23 np0005544708 python3[89216]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:23 np0005544708 podman[89188]: 2025-12-03 21:09:23.436050614 +0000 UTC m=+0.184675005 container init 4e464f41b43a8694d885396947f179f332e72481d93b792055effec957f5f58e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  3 16:09:23 np0005544708 podman[89188]: 2025-12-03 21:09:23.447418376 +0000 UTC m=+0.196042687 container start 4e464f41b43a8694d885396947f179f332e72481d93b792055effec957f5f58e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_einstein, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec  3 16:09:23 np0005544708 podman[89188]: 2025-12-03 21:09:23.461102046 +0000 UTC m=+0.209726377 container attach 4e464f41b43a8694d885396947f179f332e72481d93b792055effec957f5f58e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:23 np0005544708 podman[89226]: 2025-12-03 21:09:23.515371105 +0000 UTC m=+0.077069756 container create 19a9774ed7730953ef577f760610f6a67d4c1a4aeb8c74c9b1b8206df763a9c6 (image=quay.io/ceph/ceph:v20, name=magical_taussig, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:23 np0005544708 systemd[1]: Started libpod-conmon-19a9774ed7730953ef577f760610f6a67d4c1a4aeb8c74c9b1b8206df763a9c6.scope.
Dec  3 16:09:23 np0005544708 podman[89226]: 2025-12-03 21:09:23.470212593 +0000 UTC m=+0.031911324 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:23 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:23 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873fe2228dc62cfe9aed1332bdfed7a339a959c1d31123a871a0ee7e7081789a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:23 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873fe2228dc62cfe9aed1332bdfed7a339a959c1d31123a871a0ee7e7081789a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:23 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/873fe2228dc62cfe9aed1332bdfed7a339a959c1d31123a871a0ee7e7081789a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:23 np0005544708 podman[89226]: 2025-12-03 21:09:23.615533552 +0000 UTC m=+0.177232313 container init 19a9774ed7730953ef577f760610f6a67d4c1a4aeb8c74c9b1b8206df763a9c6 (image=quay.io/ceph/ceph:v20, name=magical_taussig, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:09:23 np0005544708 podman[89226]: 2025-12-03 21:09:23.62278348 +0000 UTC m=+0.184482131 container start 19a9774ed7730953ef577f760610f6a67d4c1a4aeb8c74c9b1b8206df763a9c6 (image=quay.io/ceph/ceph:v20, name=magical_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:23 np0005544708 podman[89226]: 2025-12-03 21:09:23.646773601 +0000 UTC m=+0.208472272 container attach 19a9774ed7730953ef577f760610f6a67d4c1a4aeb8c74c9b1b8206df763a9c6 (image=quay.io/ceph/ceph:v20, name=magical_taussig, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:23 np0005544708 ceph-mgr[75500]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/563643625; not ready for session (expect reconnect)
Dec  3 16:09:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  3 16:09:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  3 16:09:23 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 16:09:24 np0005544708 bold_einstein[89222]: [
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:    {
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:        "available": false,
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:        "being_replaced": false,
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:        "ceph_device_lvm": false,
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:        "lsm_data": {},
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:        "lvs": [],
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:        "path": "/dev/sr0",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:        "rejected_reasons": [
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "Insufficient space (<5GB)",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "Has a FileSystem"
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:        ],
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:        "sys_api": {
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "actuators": null,
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "device_nodes": [
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:                "sr0"
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            ],
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "devname": "sr0",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "human_readable_size": "482.00 KB",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "id_bus": "ata",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "model": "QEMU DVD-ROM",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "nr_requests": "2",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "parent": "/dev/sr0",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "partitions": {},
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "path": "/dev/sr0",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "removable": "1",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "rev": "2.5+",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "ro": "0",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "rotational": "1",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "sas_address": "",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "sas_device_handle": "",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "scheduler_mode": "mq-deadline",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "sectors": 0,
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "sectorsize": "2048",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "size": 493568.0,
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "support_discard": "2048",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "type": "disk",
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:            "vendor": "QEMU"
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:        }
Dec  3 16:09:24 np0005544708 bold_einstein[89222]:    }
Dec  3 16:09:24 np0005544708 bold_einstein[89222]: ]
Dec  3 16:09:24 np0005544708 systemd[1]: libpod-4e464f41b43a8694d885396947f179f332e72481d93b792055effec957f5f58e.scope: Deactivated successfully.
Dec  3 16:09:24 np0005544708 podman[89188]: 2025-12-03 21:09:24.083324671 +0000 UTC m=+0.831948972 container died 4e464f41b43a8694d885396947f179f332e72481d93b792055effec957f5f58e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  3 16:09:24 np0005544708 systemd[1]: var-lib-containers-storage-overlay-720e15975c0d0b7ce12c9bc415c5afbf6282ccce5fbd87d4f61b425435e1c855-merged.mount: Deactivated successfully.
Dec  3 16:09:24 np0005544708 podman[89188]: 2025-12-03 21:09:24.156016297 +0000 UTC m=+0.904640598 container remove 4e464f41b43a8694d885396947f179f332e72481d93b792055effec957f5f58e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/573438380' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec  3 16:09:24 np0005544708 magical_taussig[89244]: 
Dec  3 16:09:24 np0005544708 magical_taussig[89244]: {"fsid":"c21de27e-a7fd-594b-8324-0697ba9aab3a","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":82,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":14,"num_osds":3,"num_up_osds":2,"osd_up_since":1764796159,"num_in_osds":3,"osd_in_since":1764796141,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"creating+peering","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":894058496,"bytes_avail":42047225856,"bytes_total":42941284352,"inactive_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2025-12-03T21:07:59:373870+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-03T21:09:23.137474+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Dec  3 16:09:24 np0005544708 systemd[1]: libpod-conmon-4e464f41b43a8694d885396947f179f332e72481d93b792055effec957f5f58e.scope: Deactivated successfully.
Dec  3 16:09:24 np0005544708 systemd[1]: libpod-19a9774ed7730953ef577f760610f6a67d4c1a4aeb8c74c9b1b8206df763a9c6.scope: Deactivated successfully.
Dec  3 16:09:24 np0005544708 podman[89226]: 2025-12-03 21:09:24.185660653 +0000 UTC m=+0.747359304 container died 19a9774ed7730953ef577f760610f6a67d4c1a4aeb8c74c9b1b8206df763a9c6 (image=quay.io/ceph/ceph:v20, name=magical_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:24 np0005544708 systemd[1]: var-lib-containers-storage-overlay-873fe2228dc62cfe9aed1332bdfed7a339a959c1d31123a871a0ee7e7081789a-merged.mount: Deactivated successfully.
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec  3 16:09:24 np0005544708 ceph-mgr[75500]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43689k
Dec  3 16:09:24 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43689k
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  3 16:09:24 np0005544708 ceph-mgr[75500]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44738286: error parsing value: Value '44738286' is below minimum 939524096
Dec  3 16:09:24 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44738286: error parsing value: Value '44738286' is below minimum 939524096
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:09:24 np0005544708 podman[89226]: 2025-12-03 21:09:24.262614996 +0000 UTC m=+0.824313647 container remove 19a9774ed7730953ef577f760610f6a67d4c1a4aeb8c74c9b1b8206df763a9c6 (image=quay.io/ceph/ceph:v20, name=magical_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:09:24 np0005544708 systemd[1]: libpod-conmon-19a9774ed7730953ef577f760610f6a67d4c1a4aeb8c74c9b1b8206df763a9c6.scope: Deactivated successfully.
Dec  3 16:09:24 np0005544708 ceph-osd[88129]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 28.496 iops: 7294.884 elapsed_sec: 0.411
Dec  3 16:09:24 np0005544708 ceph-osd[88129]: log_channel(cluster) log [WRN] : OSD bench result of 7294.884357 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  3 16:09:24 np0005544708 ceph-osd[88129]: osd.2 0 waiting for initial osdmap
Dec  3 16:09:24 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2[88125]: 2025-12-03T21:09:24.461+0000 7f53bc55c640 -1 osd.2 0 waiting for initial osdmap
Dec  3 16:09:24 np0005544708 ceph-osd[88129]: osd.2 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  3 16:09:24 np0005544708 ceph-osd[88129]: osd.2 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec  3 16:09:24 np0005544708 ceph-osd[88129]: osd.2 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  3 16:09:24 np0005544708 ceph-osd[88129]: osd.2 14 check_osdmap_features require_osd_release unknown -> tentacle
Dec  3 16:09:24 np0005544708 ceph-osd[88129]: osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  3 16:09:24 np0005544708 ceph-osd[88129]: osd.2 14 set_numa_affinity not setting numa affinity
Dec  3 16:09:24 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-osd-2[88125]: 2025-12-03T21:09:24.490+0000 7f53b6b4f640 -1 osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  3 16:09:24 np0005544708 ceph-osd[88129]: osd.2 14 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Dec  3 16:09:24 np0005544708 podman[90165]: 2025-12-03 21:09:24.700829602 +0000 UTC m=+0.067084883 container create ea60f694b1e6af445a8ce81aa2e95dd92618a88d8843915bf3f53c68981525d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:24 np0005544708 python3[90152]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:24 np0005544708 systemd[1]: Started libpod-conmon-ea60f694b1e6af445a8ce81aa2e95dd92618a88d8843915bf3f53c68981525d5.scope.
Dec  3 16:09:24 np0005544708 podman[90165]: 2025-12-03 21:09:24.673811589 +0000 UTC m=+0.040066980 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:24 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:24 np0005544708 podman[90179]: 2025-12-03 21:09:24.777506708 +0000 UTC m=+0.048910350 container create fcc47cdc07c0382dc0656ae82187f6c347773c621910eda474f72d3742d8eaf6 (image=quay.io/ceph/ceph:v20, name=suspicious_wing, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 16:09:24 np0005544708 podman[90165]: 2025-12-03 21:09:24.796269962 +0000 UTC m=+0.162525313 container init ea60f694b1e6af445a8ce81aa2e95dd92618a88d8843915bf3f53c68981525d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bartik, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec  3 16:09:24 np0005544708 podman[90165]: 2025-12-03 21:09:24.804640453 +0000 UTC m=+0.170895734 container start ea60f694b1e6af445a8ce81aa2e95dd92618a88d8843915bf3f53c68981525d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec  3 16:09:24 np0005544708 podman[90165]: 2025-12-03 21:09:24.808392219 +0000 UTC m=+0.174647540 container attach ea60f694b1e6af445a8ce81aa2e95dd92618a88d8843915bf3f53c68981525d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bartik, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:24 np0005544708 interesting_bartik[90192]: 167 167
Dec  3 16:09:24 np0005544708 podman[90165]: 2025-12-03 21:09:24.811682376 +0000 UTC m=+0.177937717 container died ea60f694b1e6af445a8ce81aa2e95dd92618a88d8843915bf3f53c68981525d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bartik, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 16:09:24 np0005544708 systemd[1]: Started libpod-conmon-fcc47cdc07c0382dc0656ae82187f6c347773c621910eda474f72d3742d8eaf6.scope.
Dec  3 16:09:24 np0005544708 systemd[1]: libpod-ea60f694b1e6af445a8ce81aa2e95dd92618a88d8843915bf3f53c68981525d5.scope: Deactivated successfully.
Dec  3 16:09:24 np0005544708 ceph-mgr[75500]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/563643625; not ready for session (expect reconnect)
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  3 16:09:24 np0005544708 ceph-mgr[75500]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  3 16:09:24 np0005544708 podman[90179]: 2025-12-03 21:09:24.751974896 +0000 UTC m=+0.023378588 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:24 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:24 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8652d559628f04af2c3ee4f098ff5f5f58aeee442b79793aeb1ff92c5bd81de/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:24 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8652d559628f04af2c3ee4f098ff5f5f58aeee442b79793aeb1ff92c5bd81de/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:24 np0005544708 systemd[1]: var-lib-containers-storage-overlay-a94b14fe4b1b8d5ce6484caa2465d6e7bb404a5acaf57f9bd6770f6093e60e9e-merged.mount: Deactivated successfully.
Dec  3 16:09:24 np0005544708 podman[90179]: 2025-12-03 21:09:24.88035703 +0000 UTC m=+0.151760692 container init fcc47cdc07c0382dc0656ae82187f6c347773c621910eda474f72d3742d8eaf6 (image=quay.io/ceph/ceph:v20, name=suspicious_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:09:24 np0005544708 podman[90165]: 2025-12-03 21:09:24.889196551 +0000 UTC m=+0.255451872 container remove ea60f694b1e6af445a8ce81aa2e95dd92618a88d8843915bf3f53c68981525d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bartik, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  3 16:09:24 np0005544708 podman[90179]: 2025-12-03 21:09:24.891062859 +0000 UTC m=+0.162466471 container start fcc47cdc07c0382dc0656ae82187f6c347773c621910eda474f72d3742d8eaf6 (image=quay.io/ceph/ceph:v20, name=suspicious_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 16:09:24 np0005544708 podman[90179]: 2025-12-03 21:09:24.8945528 +0000 UTC m=+0.165956732 container attach fcc47cdc07c0382dc0656ae82187f6c347773c621910eda474f72d3742d8eaf6 (image=quay.io/ceph/ceph:v20, name=suspicious_wing, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:09:24 np0005544708 systemd[1]: libpod-conmon-ea60f694b1e6af445a8ce81aa2e95dd92618a88d8843915bf3f53c68981525d5.scope: Deactivated successfully.
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: Adjusting osd_memory_target on compute-0 to 43689k
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: Unable to set osd_memory_target on compute-0 to 44738286: error parsing value: Value '44738286' is below minimum 939524096
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:09:25 np0005544708 podman[90224]: 2025-12-03 21:09:25.056685784 +0000 UTC m=+0.048711157 container create d3bd1e8370eeaeb7f7b50dfe6a29532dda7bc1ae03e1ade06a46cfdc30ba689e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_noether, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 16:09:25 np0005544708 systemd[1]: Started libpod-conmon-d3bd1e8370eeaeb7f7b50dfe6a29532dda7bc1ae03e1ade06a46cfdc30ba689e.scope.
Dec  3 16:09:25 np0005544708 podman[90224]: 2025-12-03 21:09:25.034018551 +0000 UTC m=+0.026043904 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:25 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  3 16:09:25 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f28e964939078fd006a1453b06ba24f189fa6bcf6b2420c3f514e16304d8f79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f28e964939078fd006a1453b06ba24f189fa6bcf6b2420c3f514e16304d8f79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f28e964939078fd006a1453b06ba24f189fa6bcf6b2420c3f514e16304d8f79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f28e964939078fd006a1453b06ba24f189fa6bcf6b2420c3f514e16304d8f79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f28e964939078fd006a1453b06ba24f189fa6bcf6b2420c3f514e16304d8f79/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:25 np0005544708 podman[90224]: 2025-12-03 21:09:25.166252313 +0000 UTC m=+0.158277686 container init d3bd1e8370eeaeb7f7b50dfe6a29532dda7bc1ae03e1ade06a46cfdc30ba689e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_noether, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:09:25 np0005544708 podman[90224]: 2025-12-03 21:09:25.175059573 +0000 UTC m=+0.167084906 container start d3bd1e8370eeaeb7f7b50dfe6a29532dda7bc1ae03e1ade06a46cfdc30ba689e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 16:09:25 np0005544708 podman[90224]: 2025-12-03 21:09:25.179003154 +0000 UTC m=+0.171028547 container attach d3bd1e8370eeaeb7f7b50dfe6a29532dda7bc1ae03e1ade06a46cfdc30ba689e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:09:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec  3 16:09:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e15 e15: 3 total, 3 up, 3 in
Dec  3 16:09:25 np0005544708 ceph-mon[75204]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/563643625,v1:192.168.122.100:6811/563643625] boot
Dec  3 16:09:25 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 3 up, 3 in
Dec  3 16:09:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  3 16:09:25 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec  3 16:09:25 np0005544708 ceph-osd[88129]: osd.2 15 state: booting -> active
Dec  3 16:09:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  3 16:09:25 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3946086394' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  3 16:09:25 np0005544708 sweet_noether[90259]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:09:25 np0005544708 sweet_noether[90259]: --> All data devices are unavailable
Dec  3 16:09:25 np0005544708 systemd[1]: libpod-d3bd1e8370eeaeb7f7b50dfe6a29532dda7bc1ae03e1ade06a46cfdc30ba689e.scope: Deactivated successfully.
Dec  3 16:09:25 np0005544708 podman[90224]: 2025-12-03 21:09:25.813050971 +0000 UTC m=+0.805076344 container died d3bd1e8370eeaeb7f7b50dfe6a29532dda7bc1ae03e1ade06a46cfdc30ba689e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:25 np0005544708 systemd[1]: var-lib-containers-storage-overlay-5f28e964939078fd006a1453b06ba24f189fa6bcf6b2420c3f514e16304d8f79-merged.mount: Deactivated successfully.
Dec  3 16:09:25 np0005544708 podman[90224]: 2025-12-03 21:09:25.869968164 +0000 UTC m=+0.861993537 container remove d3bd1e8370eeaeb7f7b50dfe6a29532dda7bc1ae03e1ade06a46cfdc30ba689e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_noether, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:09:25 np0005544708 systemd[1]: libpod-conmon-d3bd1e8370eeaeb7f7b50dfe6a29532dda7bc1ae03e1ade06a46cfdc30ba689e.scope: Deactivated successfully.
Dec  3 16:09:25 np0005544708 ceph-mon[75204]: OSD bench result of 7294.884357 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  3 16:09:25 np0005544708 ceph-mon[75204]: osd.2 [v2:192.168.122.100:6810/563643625,v1:192.168.122.100:6811/563643625] boot
Dec  3 16:09:25 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/3946086394' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  3 16:09:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec  3 16:09:26 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3946086394' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 16:09:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Dec  3 16:09:26 np0005544708 suspicious_wing[90203]: pool 'vms' created
Dec  3 16:09:26 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Dec  3 16:09:26 np0005544708 systemd[1]: libpod-fcc47cdc07c0382dc0656ae82187f6c347773c621910eda474f72d3742d8eaf6.scope: Deactivated successfully.
Dec  3 16:09:26 np0005544708 podman[90179]: 2025-12-03 21:09:26.298875709 +0000 UTC m=+1.570279351 container died fcc47cdc07c0382dc0656ae82187f6c347773c621910eda474f72d3742d8eaf6 (image=quay.io/ceph/ceph:v20, name=suspicious_wing, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:26 np0005544708 systemd[1]: var-lib-containers-storage-overlay-d8652d559628f04af2c3ee4f098ff5f5f58aeee442b79793aeb1ff92c5bd81de-merged.mount: Deactivated successfully.
Dec  3 16:09:26 np0005544708 podman[90179]: 2025-12-03 21:09:26.364338957 +0000 UTC m=+1.635742579 container remove fcc47cdc07c0382dc0656ae82187f6c347773c621910eda474f72d3742d8eaf6 (image=quay.io/ceph/ceph:v20, name=suspicious_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:09:26 np0005544708 systemd[1]: libpod-conmon-fcc47cdc07c0382dc0656ae82187f6c347773c621910eda474f72d3742d8eaf6.scope: Deactivated successfully.
Dec  3 16:09:26 np0005544708 podman[90366]: 2025-12-03 21:09:26.455860377 +0000 UTC m=+0.051390391 container create d4daf438336c624ac52d81b4d91e7d71c9b29a356bd90059b7ed918586dbba06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_wescoff, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 16:09:26 np0005544708 systemd[1]: Started libpod-conmon-d4daf438336c624ac52d81b4d91e7d71c9b29a356bd90059b7ed918586dbba06.scope.
Dec  3 16:09:26 np0005544708 podman[90366]: 2025-12-03 21:09:26.428465058 +0000 UTC m=+0.023995082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:26 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:26 np0005544708 podman[90366]: 2025-12-03 21:09:26.553532463 +0000 UTC m=+0.149062527 container init d4daf438336c624ac52d81b4d91e7d71c9b29a356bd90059b7ed918586dbba06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_wescoff, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 16:09:26 np0005544708 podman[90366]: 2025-12-03 21:09:26.562713491 +0000 UTC m=+0.158243505 container start d4daf438336c624ac52d81b4d91e7d71c9b29a356bd90059b7ed918586dbba06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_wescoff, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:26 np0005544708 podman[90366]: 2025-12-03 21:09:26.566697663 +0000 UTC m=+0.162227677 container attach d4daf438336c624ac52d81b4d91e7d71c9b29a356bd90059b7ed918586dbba06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_wescoff, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:26 np0005544708 elegant_wescoff[90402]: 167 167
Dec  3 16:09:26 np0005544708 systemd[1]: libpod-d4daf438336c624ac52d81b4d91e7d71c9b29a356bd90059b7ed918586dbba06.scope: Deactivated successfully.
Dec  3 16:09:26 np0005544708 podman[90366]: 2025-12-03 21:09:26.568941349 +0000 UTC m=+0.164471363 container died d4daf438336c624ac52d81b4d91e7d71c9b29a356bd90059b7ed918586dbba06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec  3 16:09:26 np0005544708 systemd[1]: var-lib-containers-storage-overlay-ddeb74f34573c270bee8ae1e30328389ccf50e4cc10a8f0d01f859cebc45604a-merged.mount: Deactivated successfully.
Dec  3 16:09:26 np0005544708 podman[90366]: 2025-12-03 21:09:26.628083077 +0000 UTC m=+0.223613061 container remove d4daf438336c624ac52d81b4d91e7d71c9b29a356bd90059b7ed918586dbba06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_wescoff, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:26 np0005544708 systemd[1]: libpod-conmon-d4daf438336c624ac52d81b4d91e7d71c9b29a356bd90059b7ed918586dbba06.scope: Deactivated successfully.
Dec  3 16:09:26 np0005544708 python3[90412]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:09:26 np0005544708 podman[90428]: 2025-12-03 21:09:26.772261583 +0000 UTC m=+0.064116691 container create 4b116689e25850e8d0981a017b8567f51fc4f4bb04e08a83ce4a1710fca6892e (image=quay.io/ceph/ceph:v20, name=elegant_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 16:09:26 np0005544708 systemd[1]: Started libpod-conmon-4b116689e25850e8d0981a017b8567f51fc4f4bb04e08a83ce4a1710fca6892e.scope.
Dec  3 16:09:26 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:26 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79be4df1520491daa333f3fee306ec0da24416a052e44b909d4d6b99aa62cce8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:26 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79be4df1520491daa333f3fee306ec0da24416a052e44b909d4d6b99aa62cce8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:26 np0005544708 podman[90445]: 2025-12-03 21:09:26.828996113 +0000 UTC m=+0.048528543 container create 99d9d726353097ca0ac7c9d1949672f521a9f924181374ae8c4f3e8ecc6db0cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec  3 16:09:26 np0005544708 podman[90428]: 2025-12-03 21:09:26.842950449 +0000 UTC m=+0.134805667 container init 4b116689e25850e8d0981a017b8567f51fc4f4bb04e08a83ce4a1710fca6892e (image=quay.io/ceph/ceph:v20, name=elegant_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 16:09:26 np0005544708 podman[90428]: 2025-12-03 21:09:26.75255547 +0000 UTC m=+0.044410598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:26 np0005544708 podman[90428]: 2025-12-03 21:09:26.856473674 +0000 UTC m=+0.148328782 container start 4b116689e25850e8d0981a017b8567f51fc4f4bb04e08a83ce4a1710fca6892e (image=quay.io/ceph/ceph:v20, name=elegant_goldstine, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  3 16:09:26 np0005544708 podman[90428]: 2025-12-03 21:09:26.860192761 +0000 UTC m=+0.152047909 container attach 4b116689e25850e8d0981a017b8567f51fc4f4bb04e08a83ce4a1710fca6892e (image=quay.io/ceph/ceph:v20, name=elegant_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:26 np0005544708 systemd[1]: Started libpod-conmon-99d9d726353097ca0ac7c9d1949672f521a9f924181374ae8c4f3e8ecc6db0cd.scope.
Dec  3 16:09:26 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:26 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf70c8d3d73d7ccb030155250a930e6cab91941bb14cab2858c287ee05a7c63d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:26 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf70c8d3d73d7ccb030155250a930e6cab91941bb14cab2858c287ee05a7c63d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:26 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf70c8d3d73d7ccb030155250a930e6cab91941bb14cab2858c287ee05a7c63d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:26 np0005544708 podman[90445]: 2025-12-03 21:09:26.80782769 +0000 UTC m=+0.027360170 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:26 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf70c8d3d73d7ccb030155250a930e6cab91941bb14cab2858c287ee05a7c63d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:26 np0005544708 podman[90445]: 2025-12-03 21:09:26.910978418 +0000 UTC m=+0.130510928 container init 99d9d726353097ca0ac7c9d1949672f521a9f924181374ae8c4f3e8ecc6db0cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_payne, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  3 16:09:26 np0005544708 podman[90445]: 2025-12-03 21:09:26.916662145 +0000 UTC m=+0.136194615 container start 99d9d726353097ca0ac7c9d1949672f521a9f924181374ae8c4f3e8ecc6db0cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True)
Dec  3 16:09:26 np0005544708 podman[90445]: 2025-12-03 21:09:26.92036952 +0000 UTC m=+0.139901990 container attach 99d9d726353097ca0ac7c9d1949672f521a9f924181374ae8c4f3e8ecc6db0cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_payne, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:27 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 16 pg[2.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [2] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:09:27 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v40: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]: {
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:    "0": [
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:        {
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "devices": [
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "/dev/loop3"
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            ],
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "lv_name": "ceph_lv0",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "lv_size": "21470642176",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "name": "ceph_lv0",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "tags": {
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.cluster_name": "ceph",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.crush_device_class": "",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.encrypted": "0",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.objectstore": "bluestore",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.osd_id": "0",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.type": "block",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.vdo": "0",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.with_tpm": "0"
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            },
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "type": "block",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "vg_name": "ceph_vg0"
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:        }
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:    ],
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:    "1": [
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:        {
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "devices": [
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "/dev/loop4"
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            ],
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "lv_name": "ceph_lv1",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "lv_size": "21470642176",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "name": "ceph_lv1",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "tags": {
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.cluster_name": "ceph",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.crush_device_class": "",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.encrypted": "0",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.objectstore": "bluestore",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.osd_id": "1",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.type": "block",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.vdo": "0",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.with_tpm": "0"
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            },
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "type": "block",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "vg_name": "ceph_vg1"
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:        }
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:    ],
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:    "2": [
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:        {
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "devices": [
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "/dev/loop5"
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            ],
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "lv_name": "ceph_lv2",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "lv_size": "21470642176",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "name": "ceph_lv2",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "tags": {
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.cluster_name": "ceph",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.crush_device_class": "",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.encrypted": "0",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.objectstore": "bluestore",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.osd_id": "2",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.type": "block",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.vdo": "0",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:                "ceph.with_tpm": "0"
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            },
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "type": "block",
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:            "vg_name": "ceph_vg2"
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:        }
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]:    ]
Dec  3 16:09:27 np0005544708 beautiful_payne[90469]: }
Dec  3 16:09:27 np0005544708 systemd[1]: libpod-99d9d726353097ca0ac7c9d1949672f521a9f924181374ae8c4f3e8ecc6db0cd.scope: Deactivated successfully.
Dec  3 16:09:27 np0005544708 podman[90445]: 2025-12-03 21:09:27.230663751 +0000 UTC m=+0.450196191 container died 99d9d726353097ca0ac7c9d1949672f521a9f924181374ae8c4f3e8ecc6db0cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  3 16:09:27 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  3 16:09:27 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/662270332' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  3 16:09:27 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec  3 16:09:27 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/3946086394' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 16:09:27 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/662270332' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  3 16:09:27 np0005544708 podman[90445]: 2025-12-03 21:09:27.274739803 +0000 UTC m=+0.494272243 container remove 99d9d726353097ca0ac7c9d1949672f521a9f924181374ae8c4f3e8ecc6db0cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  3 16:09:27 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/662270332' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 16:09:27 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Dec  3 16:09:27 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Dec  3 16:09:27 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [2] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:09:27 np0005544708 elegant_goldstine[90461]: pool 'volumes' created
Dec  3 16:09:27 np0005544708 systemd[1]: libpod-conmon-99d9d726353097ca0ac7c9d1949672f521a9f924181374ae8c4f3e8ecc6db0cd.scope: Deactivated successfully.
Dec  3 16:09:27 np0005544708 systemd[1]: libpod-4b116689e25850e8d0981a017b8567f51fc4f4bb04e08a83ce4a1710fca6892e.scope: Deactivated successfully.
Dec  3 16:09:27 np0005544708 podman[90428]: 2025-12-03 21:09:27.315297891 +0000 UTC m=+0.607153049 container died 4b116689e25850e8d0981a017b8567f51fc4f4bb04e08a83ce4a1710fca6892e (image=quay.io/ceph/ceph:v20, name=elegant_goldstine, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:27 np0005544708 systemd[1]: var-lib-containers-storage-overlay-cf70c8d3d73d7ccb030155250a930e6cab91941bb14cab2858c287ee05a7c63d-merged.mount: Deactivated successfully.
Dec  3 16:09:27 np0005544708 systemd[1]: var-lib-containers-storage-overlay-79be4df1520491daa333f3fee306ec0da24416a052e44b909d4d6b99aa62cce8-merged.mount: Deactivated successfully.
Dec  3 16:09:27 np0005544708 podman[90428]: 2025-12-03 21:09:27.359715639 +0000 UTC m=+0.651570747 container remove 4b116689e25850e8d0981a017b8567f51fc4f4bb04e08a83ce4a1710fca6892e (image=quay.io/ceph/ceph:v20, name=elegant_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:27 np0005544708 systemd[1]: libpod-conmon-4b116689e25850e8d0981a017b8567f51fc4f4bb04e08a83ce4a1710fca6892e.scope: Deactivated successfully.
Dec  3 16:09:27 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 17 pg[3.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:09:27 np0005544708 python3[90599]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:27 np0005544708 podman[90600]: 2025-12-03 21:09:27.700447292 +0000 UTC m=+0.065144333 container create f41afada032f997a588d4b2a28606912598ba00676edfa41f98bc965324e31d8 (image=quay.io/ceph/ceph:v20, name=unruffled_leakey, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 16:09:27 np0005544708 podman[90623]: 2025-12-03 21:09:27.727993965 +0000 UTC m=+0.041015809 container create 10c9eadb775eb7e7053f537857c90485acc5c4f35d186faf25f64f8e0d1c2981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_blackburn, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 16:09:27 np0005544708 systemd[1]: Started libpod-conmon-f41afada032f997a588d4b2a28606912598ba00676edfa41f98bc965324e31d8.scope.
Dec  3 16:09:27 np0005544708 systemd[1]: Started libpod-conmon-10c9eadb775eb7e7053f537857c90485acc5c4f35d186faf25f64f8e0d1c2981.scope.
Dec  3 16:09:27 np0005544708 podman[90600]: 2025-12-03 21:09:27.677414801 +0000 UTC m=+0.042111952 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:27 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:27 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22cd3eb5bf7e9ac48539436650c1b50e44f6bc5d82ca06b16434423ffb956412/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:27 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22cd3eb5bf7e9ac48539436650c1b50e44f6bc5d82ca06b16434423ffb956412/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:27 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:27 np0005544708 podman[90600]: 2025-12-03 21:09:27.790760248 +0000 UTC m=+0.155457319 container init f41afada032f997a588d4b2a28606912598ba00676edfa41f98bc965324e31d8 (image=quay.io/ceph/ceph:v20, name=unruffled_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:27 np0005544708 podman[90600]: 2025-12-03 21:09:27.79724013 +0000 UTC m=+0.161937181 container start f41afada032f997a588d4b2a28606912598ba00676edfa41f98bc965324e31d8 (image=quay.io/ceph/ceph:v20, name=unruffled_leakey, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:27 np0005544708 podman[90623]: 2025-12-03 21:09:27.801097139 +0000 UTC m=+0.114118983 container init 10c9eadb775eb7e7053f537857c90485acc5c4f35d186faf25f64f8e0d1c2981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_blackburn, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec  3 16:09:27 np0005544708 podman[90623]: 2025-12-03 21:09:27.806028519 +0000 UTC m=+0.119050363 container start 10c9eadb775eb7e7053f537857c90485acc5c4f35d186faf25f64f8e0d1c2981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_blackburn, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec  3 16:09:27 np0005544708 awesome_blackburn[90645]: 167 167
Dec  3 16:09:27 np0005544708 podman[90600]: 2025-12-03 21:09:27.809072652 +0000 UTC m=+0.173769703 container attach f41afada032f997a588d4b2a28606912598ba00676edfa41f98bc965324e31d8 (image=quay.io/ceph/ceph:v20, name=unruffled_leakey, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 16:09:27 np0005544708 systemd[1]: libpod-10c9eadb775eb7e7053f537857c90485acc5c4f35d186faf25f64f8e0d1c2981.scope: Deactivated successfully.
Dec  3 16:09:27 np0005544708 podman[90623]: 2025-12-03 21:09:27.712994218 +0000 UTC m=+0.026016082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:27 np0005544708 podman[90623]: 2025-12-03 21:09:27.812851289 +0000 UTC m=+0.125873133 container attach 10c9eadb775eb7e7053f537857c90485acc5c4f35d186faf25f64f8e0d1c2981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_blackburn, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:27 np0005544708 podman[90623]: 2025-12-03 21:09:27.813664886 +0000 UTC m=+0.126686730 container died 10c9eadb775eb7e7053f537857c90485acc5c4f35d186faf25f64f8e0d1c2981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_blackburn, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 16:09:27 np0005544708 systemd[1]: var-lib-containers-storage-overlay-f4e9c0d8a7c7d63c57b6c876fe77a7e9ca51f0dee9ed00ae4fde1664a6b2c3a7-merged.mount: Deactivated successfully.
Dec  3 16:09:27 np0005544708 podman[90623]: 2025-12-03 21:09:27.847755602 +0000 UTC m=+0.160777446 container remove 10c9eadb775eb7e7053f537857c90485acc5c4f35d186faf25f64f8e0d1c2981 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  3 16:09:27 np0005544708 systemd[1]: libpod-conmon-10c9eadb775eb7e7053f537857c90485acc5c4f35d186faf25f64f8e0d1c2981.scope: Deactivated successfully.
Dec  3 16:09:28 np0005544708 podman[90689]: 2025-12-03 21:09:28.012762624 +0000 UTC m=+0.048285837 container create 6158c442b5cebbb00b48d256c858cd046b903f8130b7998abb3fa5cb97d40f3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bouman, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  3 16:09:28 np0005544708 systemd[1]: Started libpod-conmon-6158c442b5cebbb00b48d256c858cd046b903f8130b7998abb3fa5cb97d40f3a.scope.
Dec  3 16:09:28 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:28 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89367f72cd2e5095a360eac7c3f14bdfa26d0eb7f63501e94ee9304234f54ea5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:28 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89367f72cd2e5095a360eac7c3f14bdfa26d0eb7f63501e94ee9304234f54ea5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:28 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89367f72cd2e5095a360eac7c3f14bdfa26d0eb7f63501e94ee9304234f54ea5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:28 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89367f72cd2e5095a360eac7c3f14bdfa26d0eb7f63501e94ee9304234f54ea5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:28 np0005544708 podman[90689]: 2025-12-03 21:09:27.989453038 +0000 UTC m=+0.024976281 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:28 np0005544708 podman[90689]: 2025-12-03 21:09:28.092676528 +0000 UTC m=+0.128199721 container init 6158c442b5cebbb00b48d256c858cd046b903f8130b7998abb3fa5cb97d40f3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bouman, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:28 np0005544708 podman[90689]: 2025-12-03 21:09:28.10065126 +0000 UTC m=+0.136174463 container start 6158c442b5cebbb00b48d256c858cd046b903f8130b7998abb3fa5cb97d40f3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bouman, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:28 np0005544708 podman[90689]: 2025-12-03 21:09:28.104478179 +0000 UTC m=+0.140001392 container attach 6158c442b5cebbb00b48d256c858cd046b903f8130b7998abb3fa5cb97d40f3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 16:09:28 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  3 16:09:28 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3215483712' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  3 16:09:28 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec  3 16:09:28 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3215483712' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 16:09:28 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Dec  3 16:09:28 np0005544708 unruffled_leakey[90638]: pool 'backups' created
Dec  3 16:09:28 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Dec  3 16:09:28 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:09:28 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/662270332' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 16:09:28 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/3215483712' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  3 16:09:28 np0005544708 systemd[1]: libpod-f41afada032f997a588d4b2a28606912598ba00676edfa41f98bc965324e31d8.scope: Deactivated successfully.
Dec  3 16:09:28 np0005544708 podman[90600]: 2025-12-03 21:09:28.325404254 +0000 UTC m=+0.690101415 container died f41afada032f997a588d4b2a28606912598ba00676edfa41f98bc965324e31d8 (image=quay.io/ceph/ceph:v20, name=unruffled_leakey, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 16:09:28 np0005544708 systemd[1]: var-lib-containers-storage-overlay-22cd3eb5bf7e9ac48539436650c1b50e44f6bc5d82ca06b16434423ffb956412-merged.mount: Deactivated successfully.
Dec  3 16:09:28 np0005544708 podman[90600]: 2025-12-03 21:09:28.39077957 +0000 UTC m=+0.755476631 container remove f41afada032f997a588d4b2a28606912598ba00676edfa41f98bc965324e31d8 (image=quay.io/ceph/ceph:v20, name=unruffled_leakey, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  3 16:09:28 np0005544708 systemd[1]: libpod-conmon-f41afada032f997a588d4b2a28606912598ba00676edfa41f98bc965324e31d8.scope: Deactivated successfully.
Dec  3 16:09:28 np0005544708 python3[90780]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:28 np0005544708 podman[90814]: 2025-12-03 21:09:28.721166422 +0000 UTC m=+0.039771464 container create e81e3efb032b8192d7c9aabba1ea0d68ed6c0d5226cdbdd28fadc2251fd302df (image=quay.io/ceph/ceph:v20, name=optimistic_mahavira, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 16:09:28 np0005544708 systemd[1]: Started libpod-conmon-e81e3efb032b8192d7c9aabba1ea0d68ed6c0d5226cdbdd28fadc2251fd302df.scope.
Dec  3 16:09:28 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:28 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccb1907b3900174711e58c32db0aa519e62affc5bab111b8421bb7b911931a5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:28 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccb1907b3900174711e58c32db0aa519e62affc5bab111b8421bb7b911931a5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:28 np0005544708 podman[90814]: 2025-12-03 21:09:28.703390459 +0000 UTC m=+0.021995531 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:28 np0005544708 podman[90814]: 2025-12-03 21:09:28.805080757 +0000 UTC m=+0.123685849 container init e81e3efb032b8192d7c9aabba1ea0d68ed6c0d5226cdbdd28fadc2251fd302df (image=quay.io/ceph/ceph:v20, name=optimistic_mahavira, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec  3 16:09:28 np0005544708 podman[90814]: 2025-12-03 21:09:28.8175308 +0000 UTC m=+0.136135852 container start e81e3efb032b8192d7c9aabba1ea0d68ed6c0d5226cdbdd28fadc2251fd302df (image=quay.io/ceph/ceph:v20, name=optimistic_mahavira, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 16:09:28 np0005544708 podman[90814]: 2025-12-03 21:09:28.821988422 +0000 UTC m=+0.140593474 container attach e81e3efb032b8192d7c9aabba1ea0d68ed6c0d5226cdbdd28fadc2251fd302df (image=quay.io/ceph/ceph:v20, name=optimistic_mahavira, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 16:09:28 np0005544708 lvm[90846]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:09:28 np0005544708 lvm[90846]: VG ceph_vg0 finished
Dec  3 16:09:28 np0005544708 lvm[90848]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:09:28 np0005544708 lvm[90848]: VG ceph_vg1 finished
Dec  3 16:09:28 np0005544708 lvm[90850]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:09:28 np0005544708 lvm[90850]: VG ceph_vg2 finished
Dec  3 16:09:28 np0005544708 relaxed_bouman[90706]: {}
Dec  3 16:09:28 np0005544708 systemd[1]: libpod-6158c442b5cebbb00b48d256c858cd046b903f8130b7998abb3fa5cb97d40f3a.scope: Deactivated successfully.
Dec  3 16:09:28 np0005544708 podman[90689]: 2025-12-03 21:09:28.969809993 +0000 UTC m=+1.005333186 container died 6158c442b5cebbb00b48d256c858cd046b903f8130b7998abb3fa5cb97d40f3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bouman, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:09:28 np0005544708 systemd[1]: libpod-6158c442b5cebbb00b48d256c858cd046b903f8130b7998abb3fa5cb97d40f3a.scope: Consumed 1.376s CPU time.
Dec  3 16:09:28 np0005544708 systemd[1]: var-lib-containers-storage-overlay-89367f72cd2e5095a360eac7c3f14bdfa26d0eb7f63501e94ee9304234f54ea5-merged.mount: Deactivated successfully.
Dec  3 16:09:29 np0005544708 podman[90689]: 2025-12-03 21:09:29.008529425 +0000 UTC m=+1.044052618 container remove 6158c442b5cebbb00b48d256c858cd046b903f8130b7998abb3fa5cb97d40f3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_bouman, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:29 np0005544708 systemd[1]: libpod-conmon-6158c442b5cebbb00b48d256c858cd046b903f8130b7998abb3fa5cb97d40f3a.scope: Deactivated successfully.
Dec  3 16:09:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:09:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:09:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:29 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:09:29 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v43: 4 pgs: 3 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:09:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  3 16:09:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3503672702' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  3 16:09:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec  3 16:09:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3503672702' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 16:09:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Dec  3 16:09:29 np0005544708 optimistic_mahavira[90839]: pool 'images' created
Dec  3 16:09:29 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Dec  3 16:09:29 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:09:29 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/3215483712' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 16:09:29 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:29 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:29 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/3503672702' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  3 16:09:29 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/3503672702' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 16:09:29 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:09:29 np0005544708 systemd[1]: libpod-e81e3efb032b8192d7c9aabba1ea0d68ed6c0d5226cdbdd28fadc2251fd302df.scope: Deactivated successfully.
Dec  3 16:09:29 np0005544708 podman[90814]: 2025-12-03 21:09:29.320047961 +0000 UTC m=+0.638653023 container died e81e3efb032b8192d7c9aabba1ea0d68ed6c0d5226cdbdd28fadc2251fd302df (image=quay.io/ceph/ceph:v20, name=optimistic_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  3 16:09:29 np0005544708 systemd[1]: var-lib-containers-storage-overlay-1ccb1907b3900174711e58c32db0aa519e62affc5bab111b8421bb7b911931a5-merged.mount: Deactivated successfully.
Dec  3 16:09:29 np0005544708 podman[90814]: 2025-12-03 21:09:29.367337337 +0000 UTC m=+0.685942399 container remove e81e3efb032b8192d7c9aabba1ea0d68ed6c0d5226cdbdd28fadc2251fd302df (image=quay.io/ceph/ceph:v20, name=optimistic_mahavira, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:29 np0005544708 systemd[1]: libpod-conmon-e81e3efb032b8192d7c9aabba1ea0d68ed6c0d5226cdbdd28fadc2251fd302df.scope: Deactivated successfully.
Dec  3 16:09:29 np0005544708 python3[90948]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:29 np0005544708 podman[90949]: 2025-12-03 21:09:29.73985133 +0000 UTC m=+0.064766715 container create f45dae8ae11895c9efcae88b28783b9992a95f22a40ed4e037234ee9347fa298 (image=quay.io/ceph/ceph:v20, name=bold_austin, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 16:09:29 np0005544708 systemd[1]: Started libpod-conmon-f45dae8ae11895c9efcae88b28783b9992a95f22a40ed4e037234ee9347fa298.scope.
Dec  3 16:09:29 np0005544708 podman[90949]: 2025-12-03 21:09:29.715272087 +0000 UTC m=+0.040187552 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:29 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:29 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/164a08ad90a7a2c651afd1989ee4ee019c13822cc260beaf48909ee57096570e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:29 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/164a08ad90a7a2c651afd1989ee4ee019c13822cc260beaf48909ee57096570e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:29 np0005544708 podman[90949]: 2025-12-03 21:09:29.853252587 +0000 UTC m=+0.178168062 container init f45dae8ae11895c9efcae88b28783b9992a95f22a40ed4e037234ee9347fa298 (image=quay.io/ceph/ceph:v20, name=bold_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec  3 16:09:29 np0005544708 podman[90949]: 2025-12-03 21:09:29.86415206 +0000 UTC m=+0.189067445 container start f45dae8ae11895c9efcae88b28783b9992a95f22a40ed4e037234ee9347fa298 (image=quay.io/ceph/ceph:v20, name=bold_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:09:29 np0005544708 podman[90949]: 2025-12-03 21:09:29.86761938 +0000 UTC m=+0.192534795 container attach f45dae8ae11895c9efcae88b28783b9992a95f22a40ed4e037234ee9347fa298 (image=quay.io/ceph/ceph:v20, name=bold_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  3 16:09:30 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec  3 16:09:30 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Dec  3 16:09:30 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Dec  3 16:09:30 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  3 16:09:30 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/319641924' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  3 16:09:30 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:09:30 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/319641924' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  3 16:09:31 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v46: 5 pgs: 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:09:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec  3 16:09:31 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/319641924' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 16:09:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Dec  3 16:09:31 np0005544708 bold_austin[90964]: pool 'cephfs.cephfs.meta' created
Dec  3 16:09:31 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Dec  3 16:09:31 np0005544708 systemd[1]: libpod-f45dae8ae11895c9efcae88b28783b9992a95f22a40ed4e037234ee9347fa298.scope: Deactivated successfully.
Dec  3 16:09:31 np0005544708 podman[90949]: 2025-12-03 21:09:31.345757048 +0000 UTC m=+1.670672463 container died f45dae8ae11895c9efcae88b28783b9992a95f22a40ed4e037234ee9347fa298 (image=quay.io/ceph/ceph:v20, name=bold_austin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:31 np0005544708 systemd[1]: var-lib-containers-storage-overlay-164a08ad90a7a2c651afd1989ee4ee019c13822cc260beaf48909ee57096570e-merged.mount: Deactivated successfully.
Dec  3 16:09:31 np0005544708 podman[90949]: 2025-12-03 21:09:31.400623879 +0000 UTC m=+1.725539264 container remove f45dae8ae11895c9efcae88b28783b9992a95f22a40ed4e037234ee9347fa298 (image=quay.io/ceph/ceph:v20, name=bold_austin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec  3 16:09:31 np0005544708 systemd[1]: libpod-conmon-f45dae8ae11895c9efcae88b28783b9992a95f22a40ed4e037234ee9347fa298.scope: Deactivated successfully.
Dec  3 16:09:31 np0005544708 python3[91028]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:09:31 np0005544708 podman[91029]: 2025-12-03 21:09:31.783996483 +0000 UTC m=+0.050829809 container create 98105a1a9647bc4a48453dd0ebf6db2b40448b4f58c0cc58b1ef49ada418ba10 (image=quay.io/ceph/ceph:v20, name=condescending_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:31 np0005544708 systemd[1]: Started libpod-conmon-98105a1a9647bc4a48453dd0ebf6db2b40448b4f58c0cc58b1ef49ada418ba10.scope.
Dec  3 16:09:31 np0005544708 podman[91029]: 2025-12-03 21:09:31.761786799 +0000 UTC m=+0.028620145 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:31 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:31 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a6d3a35d85bab64c3e5fb22fe52c13c83732fecf2ae50c255e755e2912142a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:31 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a6d3a35d85bab64c3e5fb22fe52c13c83732fecf2ae50c255e755e2912142a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:31 np0005544708 podman[91029]: 2025-12-03 21:09:31.879425314 +0000 UTC m=+0.146258660 container init 98105a1a9647bc4a48453dd0ebf6db2b40448b4f58c0cc58b1ef49ada418ba10 (image=quay.io/ceph/ceph:v20, name=condescending_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  3 16:09:31 np0005544708 podman[91029]: 2025-12-03 21:09:31.886919457 +0000 UTC m=+0.153752813 container start 98105a1a9647bc4a48453dd0ebf6db2b40448b4f58c0cc58b1ef49ada418ba10 (image=quay.io/ceph/ceph:v20, name=condescending_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  3 16:09:31 np0005544708 podman[91029]: 2025-12-03 21:09:31.891205685 +0000 UTC m=+0.158039101 container attach 98105a1a9647bc4a48453dd0ebf6db2b40448b4f58c0cc58b1ef49ada418ba10 (image=quay.io/ceph/ceph:v20, name=condescending_yalow, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 16:09:32 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:09:32 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/319641924' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 16:09:32 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec  3 16:09:32 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Dec  3 16:09:32 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Dec  3 16:09:32 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  3 16:09:32 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3331846641' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  3 16:09:32 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:09:33 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v49: 6 pgs: 1 unknown, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:09:33 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec  3 16:09:33 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3331846641' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 16:09:33 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Dec  3 16:09:33 np0005544708 condescending_yalow[91045]: pool 'cephfs.cephfs.data' created
Dec  3 16:09:33 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/3331846641' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec  3 16:09:33 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Dec  3 16:09:33 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [1] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:09:33 np0005544708 systemd[1]: libpod-98105a1a9647bc4a48453dd0ebf6db2b40448b4f58c0cc58b1ef49ada418ba10.scope: Deactivated successfully.
Dec  3 16:09:33 np0005544708 podman[91029]: 2025-12-03 21:09:33.382773127 +0000 UTC m=+1.649606473 container died 98105a1a9647bc4a48453dd0ebf6db2b40448b4f58c0cc58b1ef49ada418ba10 (image=quay.io/ceph/ceph:v20, name=condescending_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  3 16:09:33 np0005544708 systemd[1]: var-lib-containers-storage-overlay-43a6d3a35d85bab64c3e5fb22fe52c13c83732fecf2ae50c255e755e2912142a-merged.mount: Deactivated successfully.
Dec  3 16:09:33 np0005544708 podman[91029]: 2025-12-03 21:09:33.436458574 +0000 UTC m=+1.703291910 container remove 98105a1a9647bc4a48453dd0ebf6db2b40448b4f58c0cc58b1ef49ada418ba10 (image=quay.io/ceph/ceph:v20, name=condescending_yalow, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 16:09:33 np0005544708 systemd[1]: libpod-conmon-98105a1a9647bc4a48453dd0ebf6db2b40448b4f58c0cc58b1ef49ada418ba10.scope: Deactivated successfully.
Dec  3 16:09:33 np0005544708 python3[91109]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:33 np0005544708 podman[91110]: 2025-12-03 21:09:33.927822576 +0000 UTC m=+0.061076589 container create b3bb9742da48102bc3b83ea2bf085d0529efbe0ee844728d36e1860b3bb90386 (image=quay.io/ceph/ceph:v20, name=nice_albattani, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  3 16:09:33 np0005544708 systemd[1]: Started libpod-conmon-b3bb9742da48102bc3b83ea2bf085d0529efbe0ee844728d36e1860b3bb90386.scope.
Dec  3 16:09:33 np0005544708 podman[91110]: 2025-12-03 21:09:33.902808635 +0000 UTC m=+0.036062738 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:34 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:34 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0304529cd862cdd724b2058e7c2d2f47178c9de204f4a57a0f5d98e2cfaae6a5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:34 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0304529cd862cdd724b2058e7c2d2f47178c9de204f4a57a0f5d98e2cfaae6a5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:34 np0005544708 podman[91110]: 2025-12-03 21:09:34.022366257 +0000 UTC m=+0.155620310 container init b3bb9742da48102bc3b83ea2bf085d0529efbe0ee844728d36e1860b3bb90386 (image=quay.io/ceph/ceph:v20, name=nice_albattani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  3 16:09:34 np0005544708 podman[91110]: 2025-12-03 21:09:34.032511745 +0000 UTC m=+0.165765758 container start b3bb9742da48102bc3b83ea2bf085d0529efbe0ee844728d36e1860b3bb90386 (image=quay.io/ceph/ceph:v20, name=nice_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 16:09:34 np0005544708 podman[91110]: 2025-12-03 21:09:34.044837677 +0000 UTC m=+0.178091690 container attach b3bb9742da48102bc3b83ea2bf085d0529efbe0ee844728d36e1860b3bb90386 (image=quay.io/ceph/ceph:v20, name=nice_albattani, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 16:09:34 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec  3 16:09:34 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Dec  3 16:09:34 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Dec  3 16:09:34 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/3331846641' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  3 16:09:34 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [1] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:09:34 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec  3 16:09:34 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3010214953' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Dec  3 16:09:35 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v52: 7 pgs: 2 unknown, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:09:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec  3 16:09:35 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3010214953' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  3 16:09:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Dec  3 16:09:35 np0005544708 nice_albattani[91125]: enabled application 'rbd' on pool 'vms'
Dec  3 16:09:35 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Dec  3 16:09:35 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/3010214953' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Dec  3 16:09:35 np0005544708 systemd[1]: libpod-b3bb9742da48102bc3b83ea2bf085d0529efbe0ee844728d36e1860b3bb90386.scope: Deactivated successfully.
Dec  3 16:09:35 np0005544708 podman[91110]: 2025-12-03 21:09:35.399223815 +0000 UTC m=+1.532477858 container died b3bb9742da48102bc3b83ea2bf085d0529efbe0ee844728d36e1860b3bb90386 (image=quay.io/ceph/ceph:v20, name=nice_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Dec  3 16:09:35 np0005544708 systemd[1]: var-lib-containers-storage-overlay-0304529cd862cdd724b2058e7c2d2f47178c9de204f4a57a0f5d98e2cfaae6a5-merged.mount: Deactivated successfully.
Dec  3 16:09:35 np0005544708 podman[91110]: 2025-12-03 21:09:35.442478829 +0000 UTC m=+1.575732882 container remove b3bb9742da48102bc3b83ea2bf085d0529efbe0ee844728d36e1860b3bb90386 (image=quay.io/ceph/ceph:v20, name=nice_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:35 np0005544708 systemd[1]: libpod-conmon-b3bb9742da48102bc3b83ea2bf085d0529efbe0ee844728d36e1860b3bb90386.scope: Deactivated successfully.
Dec  3 16:09:35 np0005544708 python3[91187]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:35 np0005544708 podman[91188]: 2025-12-03 21:09:35.824992437 +0000 UTC m=+0.065199334 container create 294c8c6ea310d753c924e4118af2df15110800b4d546d7017cb404d494fc8c6c (image=quay.io/ceph/ceph:v20, name=hopeful_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:09:35 np0005544708 systemd[1]: Started libpod-conmon-294c8c6ea310d753c924e4118af2df15110800b4d546d7017cb404d494fc8c6c.scope.
Dec  3 16:09:35 np0005544708 podman[91188]: 2025-12-03 21:09:35.791227157 +0000 UTC m=+0.031434094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:35 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:35 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69572a64d0c43f7ca3380e3e083fc2e1c044d1da2a81b872cb1df0fe473e059/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:35 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69572a64d0c43f7ca3380e3e083fc2e1c044d1da2a81b872cb1df0fe473e059/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:35 np0005544708 podman[91188]: 2025-12-03 21:09:35.912337921 +0000 UTC m=+0.152544878 container init 294c8c6ea310d753c924e4118af2df15110800b4d546d7017cb404d494fc8c6c (image=quay.io/ceph/ceph:v20, name=hopeful_wilson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:09:35 np0005544708 podman[91188]: 2025-12-03 21:09:35.92210489 +0000 UTC m=+0.162311797 container start 294c8c6ea310d753c924e4118af2df15110800b4d546d7017cb404d494fc8c6c (image=quay.io/ceph/ceph:v20, name=hopeful_wilson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 16:09:35 np0005544708 podman[91188]: 2025-12-03 21:09:35.929346259 +0000 UTC m=+0.169553196 container attach 294c8c6ea310d753c924e4118af2df15110800b4d546d7017cb404d494fc8c6c (image=quay.io/ceph/ceph:v20, name=hopeful_wilson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec  3 16:09:36 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3790791762' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Dec  3 16:09:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec  3 16:09:36 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/3010214953' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  3 16:09:36 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/3790791762' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Dec  3 16:09:36 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3790791762' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  3 16:09:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Dec  3 16:09:36 np0005544708 hopeful_wilson[91203]: enabled application 'rbd' on pool 'volumes'
Dec  3 16:09:36 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Dec  3 16:09:36 np0005544708 systemd[1]: libpod-294c8c6ea310d753c924e4118af2df15110800b4d546d7017cb404d494fc8c6c.scope: Deactivated successfully.
Dec  3 16:09:36 np0005544708 podman[91228]: 2025-12-03 21:09:36.471389947 +0000 UTC m=+0.034273003 container died 294c8c6ea310d753c924e4118af2df15110800b4d546d7017cb404d494fc8c6c (image=quay.io/ceph/ceph:v20, name=hopeful_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  3 16:09:36 np0005544708 systemd[1]: var-lib-containers-storage-overlay-e69572a64d0c43f7ca3380e3e083fc2e1c044d1da2a81b872cb1df0fe473e059-merged.mount: Deactivated successfully.
Dec  3 16:09:36 np0005544708 podman[91228]: 2025-12-03 21:09:36.521296286 +0000 UTC m=+0.084179312 container remove 294c8c6ea310d753c924e4118af2df15110800b4d546d7017cb404d494fc8c6c (image=quay.io/ceph/ceph:v20, name=hopeful_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Dec  3 16:09:36 np0005544708 systemd[1]: libpod-conmon-294c8c6ea310d753c924e4118af2df15110800b4d546d7017cb404d494fc8c6c.scope: Deactivated successfully.
Dec  3 16:09:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:09:36 np0005544708 python3[91268]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:36 np0005544708 podman[91269]: 2025-12-03 21:09:36.964367631 +0000 UTC m=+0.071582774 container create d213637b430e61f59ca05a3f9d08b766e2a8b8f55545a0090de9ab1f93ffbbcf (image=quay.io/ceph/ceph:v20, name=adoring_mendel, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 16:09:37 np0005544708 systemd[1]: Started libpod-conmon-d213637b430e61f59ca05a3f9d08b766e2a8b8f55545a0090de9ab1f93ffbbcf.scope.
Dec  3 16:09:37 np0005544708 podman[91269]: 2025-12-03 21:09:36.935788476 +0000 UTC m=+0.043003629 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:37 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:37 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ef1132576152a5f18c5265cc577e0714f82c837a9e162b873e93c070ba1db0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:37 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ef1132576152a5f18c5265cc577e0714f82c837a9e162b873e93c070ba1db0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:37 np0005544708 podman[91269]: 2025-12-03 21:09:37.067354405 +0000 UTC m=+0.174569568 container init d213637b430e61f59ca05a3f9d08b766e2a8b8f55545a0090de9ab1f93ffbbcf (image=quay.io/ceph/ceph:v20, name=adoring_mendel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec  3 16:09:37 np0005544708 podman[91269]: 2025-12-03 21:09:37.075104674 +0000 UTC m=+0.182319787 container start d213637b430e61f59ca05a3f9d08b766e2a8b8f55545a0090de9ab1f93ffbbcf (image=quay.io/ceph/ceph:v20, name=adoring_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Dec  3 16:09:37 np0005544708 podman[91269]: 2025-12-03 21:09:37.078939952 +0000 UTC m=+0.186155155 container attach d213637b430e61f59ca05a3f9d08b766e2a8b8f55545a0090de9ab1f93ffbbcf (image=quay.io/ceph/ceph:v20, name=adoring_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  3 16:09:37 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v55: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:09:37 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/3790791762' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  3 16:09:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec  3 16:09:37 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3350298550' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Dec  3 16:09:38 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec  3 16:09:38 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/3350298550' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Dec  3 16:09:38 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3350298550' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  3 16:09:38 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Dec  3 16:09:38 np0005544708 adoring_mendel[91284]: enabled application 'rbd' on pool 'backups'
Dec  3 16:09:38 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Dec  3 16:09:38 np0005544708 systemd[1]: libpod-d213637b430e61f59ca05a3f9d08b766e2a8b8f55545a0090de9ab1f93ffbbcf.scope: Deactivated successfully.
Dec  3 16:09:38 np0005544708 podman[91269]: 2025-12-03 21:09:38.457886633 +0000 UTC m=+1.565101786 container died d213637b430e61f59ca05a3f9d08b766e2a8b8f55545a0090de9ab1f93ffbbcf (image=quay.io/ceph/ceph:v20, name=adoring_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:38 np0005544708 systemd[1]: var-lib-containers-storage-overlay-74ef1132576152a5f18c5265cc577e0714f82c837a9e162b873e93c070ba1db0-merged.mount: Deactivated successfully.
Dec  3 16:09:38 np0005544708 podman[91269]: 2025-12-03 21:09:38.510472817 +0000 UTC m=+1.617687960 container remove d213637b430e61f59ca05a3f9d08b766e2a8b8f55545a0090de9ab1f93ffbbcf (image=quay.io/ceph/ceph:v20, name=adoring_mendel, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 16:09:38 np0005544708 systemd[1]: libpod-conmon-d213637b430e61f59ca05a3f9d08b766e2a8b8f55545a0090de9ab1f93ffbbcf.scope: Deactivated successfully.
Dec  3 16:09:38 np0005544708 python3[91344]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:38 np0005544708 podman[91345]: 2025-12-03 21:09:38.951187624 +0000 UTC m=+0.055361693 container create 4dc9952fa32780e8267a8db0171891a74b804bffb1fc84240685e030cfd1bcf4 (image=quay.io/ceph/ceph:v20, name=blissful_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 16:09:38 np0005544708 systemd[1]: Started libpod-conmon-4dc9952fa32780e8267a8db0171891a74b804bffb1fc84240685e030cfd1bcf4.scope.
Dec  3 16:09:39 np0005544708 podman[91345]: 2025-12-03 21:09:38.92216705 +0000 UTC m=+0.026341209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:39 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:39 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7663e56f9978917969dfd5974a0b6df01a7041aba8e283f1dadc72f3a9ce210/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:39 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7663e56f9978917969dfd5974a0b6df01a7041aba8e283f1dadc72f3a9ce210/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:39 np0005544708 podman[91345]: 2025-12-03 21:09:39.040474499 +0000 UTC m=+0.144648608 container init 4dc9952fa32780e8267a8db0171891a74b804bffb1fc84240685e030cfd1bcf4 (image=quay.io/ceph/ceph:v20, name=blissful_goldberg, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  3 16:09:39 np0005544708 podman[91345]: 2025-12-03 21:09:39.045674804 +0000 UTC m=+0.149848903 container start 4dc9952fa32780e8267a8db0171891a74b804bffb1fc84240685e030cfd1bcf4 (image=quay.io/ceph/ceph:v20, name=blissful_goldberg, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:39 np0005544708 podman[91345]: 2025-12-03 21:09:39.04987489 +0000 UTC m=+0.154048989 container attach 4dc9952fa32780e8267a8db0171891a74b804bffb1fc84240685e030cfd1bcf4 (image=quay.io/ceph/ceph:v20, name=blissful_goldberg, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  3 16:09:39 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:09:39 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/3350298550' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  3 16:09:39 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec  3 16:09:39 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2912358146' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Dec  3 16:09:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec  3 16:09:40 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/2912358146' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Dec  3 16:09:40 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2912358146' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  3 16:09:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Dec  3 16:09:40 np0005544708 blissful_goldberg[91361]: enabled application 'rbd' on pool 'images'
Dec  3 16:09:40 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Dec  3 16:09:40 np0005544708 systemd[1]: libpod-4dc9952fa32780e8267a8db0171891a74b804bffb1fc84240685e030cfd1bcf4.scope: Deactivated successfully.
Dec  3 16:09:40 np0005544708 podman[91386]: 2025-12-03 21:09:40.526724991 +0000 UTC m=+0.027260148 container died 4dc9952fa32780e8267a8db0171891a74b804bffb1fc84240685e030cfd1bcf4 (image=quay.io/ceph/ceph:v20, name=blissful_goldberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:09:40 np0005544708 systemd[1]: var-lib-containers-storage-overlay-d7663e56f9978917969dfd5974a0b6df01a7041aba8e283f1dadc72f3a9ce210-merged.mount: Deactivated successfully.
Dec  3 16:09:40 np0005544708 podman[91386]: 2025-12-03 21:09:40.568289541 +0000 UTC m=+0.068824588 container remove 4dc9952fa32780e8267a8db0171891a74b804bffb1fc84240685e030cfd1bcf4 (image=quay.io/ceph/ceph:v20, name=blissful_goldberg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:09:40 np0005544708 systemd[1]: libpod-conmon-4dc9952fa32780e8267a8db0171891a74b804bffb1fc84240685e030cfd1bcf4.scope: Deactivated successfully.
Dec  3 16:09:40 np0005544708 python3[91427]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:40 np0005544708 podman[91428]: 2025-12-03 21:09:40.964091469 +0000 UTC m=+0.073181046 container create 021b13ecd467cd08702f8c1371605bd1187ca3f654f132d408630cb3c4ff08c9 (image=quay.io/ceph/ceph:v20, name=laughing_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  3 16:09:41 np0005544708 systemd[1]: Started libpod-conmon-021b13ecd467cd08702f8c1371605bd1187ca3f654f132d408630cb3c4ff08c9.scope.
Dec  3 16:09:41 np0005544708 podman[91428]: 2025-12-03 21:09:40.930304329 +0000 UTC m=+0.039393936 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:41 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:41 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6019fbea4ee003eef82b1e5d7f1d7c259a18f7abdb8d1d0cae8ce633b5d133/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:41 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6019fbea4ee003eef82b1e5d7f1d7c259a18f7abdb8d1d0cae8ce633b5d133/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:41 np0005544708 podman[91428]: 2025-12-03 21:09:41.073062596 +0000 UTC m=+0.182152233 container init 021b13ecd467cd08702f8c1371605bd1187ca3f654f132d408630cb3c4ff08c9 (image=quay.io/ceph/ceph:v20, name=laughing_williams, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:41 np0005544708 podman[91428]: 2025-12-03 21:09:41.078945706 +0000 UTC m=+0.188035273 container start 021b13ecd467cd08702f8c1371605bd1187ca3f654f132d408630cb3c4ff08c9 (image=quay.io/ceph/ceph:v20, name=laughing_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:41 np0005544708 podman[91428]: 2025-12-03 21:09:41.083376427 +0000 UTC m=+0.192466034 container attach 021b13ecd467cd08702f8c1371605bd1187ca3f654f132d408630cb3c4ff08c9 (image=quay.io/ceph/ceph:v20, name=laughing_williams, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  3 16:09:41 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:09:41 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/2912358146' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  3 16:09:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec  3 16:09:41 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2642974806' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Dec  3 16:09:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:09:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec  3 16:09:42 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/2642974806' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Dec  3 16:09:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2642974806' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  3 16:09:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Dec  3 16:09:42 np0005544708 laughing_williams[91444]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec  3 16:09:42 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Dec  3 16:09:42 np0005544708 systemd[1]: libpod-021b13ecd467cd08702f8c1371605bd1187ca3f654f132d408630cb3c4ff08c9.scope: Deactivated successfully.
Dec  3 16:09:42 np0005544708 podman[91428]: 2025-12-03 21:09:42.520955266 +0000 UTC m=+1.630044803 container died 021b13ecd467cd08702f8c1371605bd1187ca3f654f132d408630cb3c4ff08c9 (image=quay.io/ceph/ceph:v20, name=laughing_williams, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 16:09:42 np0005544708 systemd[1]: var-lib-containers-storage-overlay-6a6019fbea4ee003eef82b1e5d7f1d7c259a18f7abdb8d1d0cae8ce633b5d133-merged.mount: Deactivated successfully.
Dec  3 16:09:42 np0005544708 podman[91428]: 2025-12-03 21:09:42.741760289 +0000 UTC m=+1.850849856 container remove 021b13ecd467cd08702f8c1371605bd1187ca3f654f132d408630cb3c4ff08c9 (image=quay.io/ceph/ceph:v20, name=laughing_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  3 16:09:42 np0005544708 systemd[1]: libpod-conmon-021b13ecd467cd08702f8c1371605bd1187ca3f654f132d408630cb3c4ff08c9.scope: Deactivated successfully.
Dec  3 16:09:43 np0005544708 python3[91506]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:43 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:09:43 np0005544708 podman[91507]: 2025-12-03 21:09:43.183493407 +0000 UTC m=+0.042367358 container create 890e56f16dde7fa1ec7268c563668d8bd6bbaa254cd2c09ced77f4f2f3cacfb5 (image=quay.io/ceph/ceph:v20, name=trusting_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 16:09:43 np0005544708 systemd[1]: Started libpod-conmon-890e56f16dde7fa1ec7268c563668d8bd6bbaa254cd2c09ced77f4f2f3cacfb5.scope.
Dec  3 16:09:43 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:43 np0005544708 podman[91507]: 2025-12-03 21:09:43.164630771 +0000 UTC m=+0.023504742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:43 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2d1c1c5b33d6d91649d768fd9226eafbbd4c87fa032d31c3ed14fbf09379d2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:43 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2d1c1c5b33d6d91649d768fd9226eafbbd4c87fa032d31c3ed14fbf09379d2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:43 np0005544708 podman[91507]: 2025-12-03 21:09:43.279457308 +0000 UTC m=+0.138331299 container init 890e56f16dde7fa1ec7268c563668d8bd6bbaa254cd2c09ced77f4f2f3cacfb5 (image=quay.io/ceph/ceph:v20, name=trusting_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 16:09:43 np0005544708 podman[91507]: 2025-12-03 21:09:43.289938522 +0000 UTC m=+0.148812483 container start 890e56f16dde7fa1ec7268c563668d8bd6bbaa254cd2c09ced77f4f2f3cacfb5 (image=quay.io/ceph/ceph:v20, name=trusting_montalcini, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  3 16:09:43 np0005544708 podman[91507]: 2025-12-03 21:09:43.294112617 +0000 UTC m=+0.152986578 container attach 890e56f16dde7fa1ec7268c563668d8bd6bbaa254cd2c09ced77f4f2f3cacfb5 (image=quay.io/ceph/ceph:v20, name=trusting_montalcini, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 16:09:43 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/2642974806' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  3 16:09:43 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec  3 16:09:43 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/915051798' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Dec  3 16:09:44 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec  3 16:09:44 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/915051798' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  3 16:09:44 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Dec  3 16:09:44 np0005544708 trusting_montalcini[91523]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec  3 16:09:44 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Dec  3 16:09:44 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/915051798' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Dec  3 16:09:44 np0005544708 systemd[1]: libpod-890e56f16dde7fa1ec7268c563668d8bd6bbaa254cd2c09ced77f4f2f3cacfb5.scope: Deactivated successfully.
Dec  3 16:09:44 np0005544708 podman[91548]: 2025-12-03 21:09:44.570543183 +0000 UTC m=+0.023738977 container died 890e56f16dde7fa1ec7268c563668d8bd6bbaa254cd2c09ced77f4f2f3cacfb5 (image=quay.io/ceph/ceph:v20, name=trusting_montalcini, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec  3 16:09:44 np0005544708 systemd[1]: var-lib-containers-storage-overlay-7e2d1c1c5b33d6d91649d768fd9226eafbbd4c87fa032d31c3ed14fbf09379d2-merged.mount: Deactivated successfully.
Dec  3 16:09:44 np0005544708 podman[91548]: 2025-12-03 21:09:44.608775214 +0000 UTC m=+0.061970988 container remove 890e56f16dde7fa1ec7268c563668d8bd6bbaa254cd2c09ced77f4f2f3cacfb5 (image=quay.io/ceph/ceph:v20, name=trusting_montalcini, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:44 np0005544708 systemd[1]: libpod-conmon-890e56f16dde7fa1ec7268c563668d8bd6bbaa254cd2c09ced77f4f2f3cacfb5.scope: Deactivated successfully.
Dec  3 16:09:45 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:09:45 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/915051798' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  3 16:09:46 np0005544708 python3[91638]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 16:09:46 np0005544708 python3[91709]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764796186.0829332-36776-133040734642724/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:09:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:09:47 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:09:47 np0005544708 python3[91759]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:47 np0005544708 podman[91760]: 2025-12-03 21:09:47.273562422 +0000 UTC m=+0.060637470 container create 79ac7de97b977ed2f279210e9f9f4176261410db8da1ce979cf352b291b5ed1a (image=quay.io/ceph/ceph:v20, name=jovial_villani, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 16:09:47 np0005544708 systemd[1]: Started libpod-conmon-79ac7de97b977ed2f279210e9f9f4176261410db8da1ce979cf352b291b5ed1a.scope.
Dec  3 16:09:47 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:47 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e893d492b7f55d3ba56ad358af7638f766b13b53f8695eaf97217c44adcef1bf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:47 np0005544708 podman[91760]: 2025-12-03 21:09:47.252878999 +0000 UTC m=+0.039954097 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:47 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e893d492b7f55d3ba56ad358af7638f766b13b53f8695eaf97217c44adcef1bf/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:47 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e893d492b7f55d3ba56ad358af7638f766b13b53f8695eaf97217c44adcef1bf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:47 np0005544708 podman[91760]: 2025-12-03 21:09:47.362732494 +0000 UTC m=+0.149807562 container init 79ac7de97b977ed2f279210e9f9f4176261410db8da1ce979cf352b291b5ed1a (image=quay.io/ceph/ceph:v20, name=jovial_villani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:09:47 np0005544708 podman[91760]: 2025-12-03 21:09:47.37085703 +0000 UTC m=+0.157932078 container start 79ac7de97b977ed2f279210e9f9f4176261410db8da1ce979cf352b291b5ed1a (image=quay.io/ceph/ceph:v20, name=jovial_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 16:09:47 np0005544708 podman[91760]: 2025-12-03 21:09:47.374860722 +0000 UTC m=+0.161935790 container attach 79ac7de97b977ed2f279210e9f9f4176261410db8da1ce979cf352b291b5ed1a (image=quay.io/ceph/ceph:v20, name=jovial_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 16:09:47 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14230 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:09:47 np0005544708 ceph-mgr[75500]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  3 16:09:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec  3 16:09:47 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Dec  3 16:09:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec  3 16:09:47 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Dec  3 16:09:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec  3 16:09:47 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Dec  3 16:09:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec  3 16:09:47 np0005544708 ceph-mon[75204]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  3 16:09:47 np0005544708 ceph-mon[75204]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  3 16:09:47 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0[75200]: 2025-12-03T21:09:47.859+0000 7f6ce6e09640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  3 16:09:47 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  3 16:09:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).mds e2 new map
Dec  3 16:09:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).mds e2 print_map#012e2#012btime 2025-12-03T21:09:47:861269+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-03T21:09:47.860950+0000#012modified#0112025-12-03T21:09:47.860950+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 
Dec  3 16:09:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Dec  3 16:09:47 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Dec  3 16:09:47 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec  3 16:09:47 np0005544708 ceph-mgr[75500]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec  3 16:09:47 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec  3 16:09:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  3 16:09:47 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:47 np0005544708 ceph-mgr[75500]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  3 16:09:47 np0005544708 systemd[1]: libpod-79ac7de97b977ed2f279210e9f9f4176261410db8da1ce979cf352b291b5ed1a.scope: Deactivated successfully.
Dec  3 16:09:47 np0005544708 podman[91760]: 2025-12-03 21:09:47.916231626 +0000 UTC m=+0.703306674 container died 79ac7de97b977ed2f279210e9f9f4176261410db8da1ce979cf352b291b5ed1a (image=quay.io/ceph/ceph:v20, name=jovial_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:47 np0005544708 systemd[1]: var-lib-containers-storage-overlay-e893d492b7f55d3ba56ad358af7638f766b13b53f8695eaf97217c44adcef1bf-merged.mount: Deactivated successfully.
Dec  3 16:09:47 np0005544708 podman[91760]: 2025-12-03 21:09:47.956897667 +0000 UTC m=+0.743972715 container remove 79ac7de97b977ed2f279210e9f9f4176261410db8da1ce979cf352b291b5ed1a (image=quay.io/ceph/ceph:v20, name=jovial_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:47 np0005544708 systemd[1]: libpod-conmon-79ac7de97b977ed2f279210e9f9f4176261410db8da1ce979cf352b291b5ed1a.scope: Deactivated successfully.
Dec  3 16:09:48 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Dec  3 16:09:48 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Dec  3 16:09:48 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Dec  3 16:09:48 np0005544708 ceph-mon[75204]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  3 16:09:48 np0005544708 ceph-mon[75204]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  3 16:09:48 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  3 16:09:48 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:48 np0005544708 python3[91888]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:48 np0005544708 podman[91905]: 2025-12-03 21:09:48.34409522 +0000 UTC m=+0.049800309 container create d565a0a2625fe5bc38cf741f326525219ac4d31bee60a8628a876a29d065e491 (image=quay.io/ceph/ceph:v20, name=nostalgic_khorana, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec  3 16:09:48 np0005544708 systemd[1]: Started libpod-conmon-d565a0a2625fe5bc38cf741f326525219ac4d31bee60a8628a876a29d065e491.scope.
Dec  3 16:09:48 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:48 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf1019f02f920b9c4716f66f94920cf55969daffb7fad6d8702de4bb3fe4881/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:48 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf1019f02f920b9c4716f66f94920cf55969daffb7fad6d8702de4bb3fe4881/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:48 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf1019f02f920b9c4716f66f94920cf55969daffb7fad6d8702de4bb3fe4881/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:48 np0005544708 podman[91905]: 2025-12-03 21:09:48.320455056 +0000 UTC m=+0.026160165 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:48 np0005544708 podman[91905]: 2025-12-03 21:09:48.417656503 +0000 UTC m=+0.123361612 container init d565a0a2625fe5bc38cf741f326525219ac4d31bee60a8628a876a29d065e491 (image=quay.io/ceph/ceph:v20, name=nostalgic_khorana, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 16:09:48 np0005544708 podman[91905]: 2025-12-03 21:09:48.424663656 +0000 UTC m=+0.130368745 container start d565a0a2625fe5bc38cf741f326525219ac4d31bee60a8628a876a29d065e491 (image=quay.io/ceph/ceph:v20, name=nostalgic_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  3 16:09:48 np0005544708 podman[91905]: 2025-12-03 21:09:48.429071756 +0000 UTC m=+0.134776875 container attach d565a0a2625fe5bc38cf741f326525219ac4d31bee60a8628a876a29d065e491 (image=quay.io/ceph/ceph:v20, name=nostalgic_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec  3 16:09:48 np0005544708 podman[91949]: 2025-12-03 21:09:48.467492962 +0000 UTC m=+0.055812492 container exec 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 16:09:48 np0005544708 podman[91949]: 2025-12-03 21:09:48.587970544 +0000 UTC m=+0.176290104 container exec_died 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:48 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14232 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:09:48 np0005544708 ceph-mgr[75500]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec  3 16:09:48 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec  3 16:09:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  3 16:09:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:48 np0005544708 nostalgic_khorana[91946]: Scheduled mds.cephfs update...
Dec  3 16:09:48 np0005544708 systemd[1]: libpod-d565a0a2625fe5bc38cf741f326525219ac4d31bee60a8628a876a29d065e491.scope: Deactivated successfully.
Dec  3 16:09:48 np0005544708 podman[91905]: 2025-12-03 21:09:48.854071532 +0000 UTC m=+0.559776621 container died d565a0a2625fe5bc38cf741f326525219ac4d31bee60a8628a876a29d065e491 (image=quay.io/ceph/ceph:v20, name=nostalgic_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 16:09:48 np0005544708 systemd[1]: var-lib-containers-storage-overlay-adf1019f02f920b9c4716f66f94920cf55969daffb7fad6d8702de4bb3fe4881-merged.mount: Deactivated successfully.
Dec  3 16:09:48 np0005544708 podman[91905]: 2025-12-03 21:09:48.892373294 +0000 UTC m=+0.598078383 container remove d565a0a2625fe5bc38cf741f326525219ac4d31bee60a8628a876a29d065e491 (image=quay.io/ceph/ceph:v20, name=nostalgic_khorana, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:48 np0005544708 systemd[1]: libpod-conmon-d565a0a2625fe5bc38cf741f326525219ac4d31bee60a8628a876a29d065e491.scope: Deactivated successfully.
Dec  3 16:09:49 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:09:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:09:49 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:09:49 np0005544708 ceph-mon[75204]: Saving service mds.cephfs spec with placement compute-0
Dec  3 16:09:49 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:49 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:49 np0005544708 python3[92264]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  3 16:09:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:09:49 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:09:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:09:49 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:09:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:09:49 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:09:49 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:09:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:09:49 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:09:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:09:49 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:09:50 np0005544708 python3[92356]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764796189.2709978-36806-224620764690427/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=100907596fddba72a04e8a16770dbec161f9317a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:09:50 np0005544708 ceph-mon[75204]: Saving service mds.cephfs spec with placement compute-0
Dec  3 16:09:50 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:50 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:50 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:09:50 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:50 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:09:50 np0005544708 podman[92481]: 2025-12-03 21:09:50.439682276 +0000 UTC m=+0.050324470 container create 5740b1215d7467655a6c54f13f602763e8ee3ec6f745dc4f258205ec8b4bf54d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Dec  3 16:09:50 np0005544708 systemd[1]: Started libpod-conmon-5740b1215d7467655a6c54f13f602763e8ee3ec6f745dc4f258205ec8b4bf54d.scope.
Dec  3 16:09:50 np0005544708 python3[92480]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:50 np0005544708 podman[92481]: 2025-12-03 21:09:50.416284437 +0000 UTC m=+0.026926651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:50 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:50 np0005544708 podman[92481]: 2025-12-03 21:09:50.543105519 +0000 UTC m=+0.153747803 container init 5740b1215d7467655a6c54f13f602763e8ee3ec6f745dc4f258205ec8b4bf54d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:50 np0005544708 podman[92481]: 2025-12-03 21:09:50.554616054 +0000 UTC m=+0.165258248 container start 5740b1215d7467655a6c54f13f602763e8ee3ec6f745dc4f258205ec8b4bf54d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_einstein, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 16:09:50 np0005544708 podman[92481]: 2025-12-03 21:09:50.559859932 +0000 UTC m=+0.170502226 container attach 5740b1215d7467655a6c54f13f602763e8ee3ec6f745dc4f258205ec8b4bf54d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_einstein, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:50 np0005544708 mystifying_einstein[92497]: 167 167
Dec  3 16:09:50 np0005544708 systemd[1]: libpod-5740b1215d7467655a6c54f13f602763e8ee3ec6f745dc4f258205ec8b4bf54d.scope: Deactivated successfully.
Dec  3 16:09:50 np0005544708 podman[92500]: 2025-12-03 21:09:50.565415475 +0000 UTC m=+0.048099863 container create 374c7a614008d8c33432bf5e38faca3998a8e26e2c280cbcfa4c56df06633d22 (image=quay.io/ceph/ceph:v20, name=keen_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:50 np0005544708 podman[92481]: 2025-12-03 21:09:50.570173592 +0000 UTC m=+0.180815826 container died 5740b1215d7467655a6c54f13f602763e8ee3ec6f745dc4f258205ec8b4bf54d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_einstein, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec  3 16:09:50 np0005544708 systemd[1]: Started libpod-conmon-374c7a614008d8c33432bf5e38faca3998a8e26e2c280cbcfa4c56df06633d22.scope.
Dec  3 16:09:50 np0005544708 podman[92500]: 2025-12-03 21:09:50.543458616 +0000 UTC m=+0.026142984 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:50 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:50 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9040bfa689421ad13cbd409261471e0a026e8261204563f751916271015ddab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:50 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9040bfa689421ad13cbd409261471e0a026e8261204563f751916271015ddab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:50 np0005544708 systemd[1]: var-lib-containers-storage-overlay-056c266f4e704909bb8349b35eef7e9aff1de06c6302c86c00d182fdd8871b25-merged.mount: Deactivated successfully.
Dec  3 16:09:51 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:09:51 np0005544708 podman[92481]: 2025-12-03 21:09:51.254955286 +0000 UTC m=+0.865597490 container remove 5740b1215d7467655a6c54f13f602763e8ee3ec6f745dc4f258205ec8b4bf54d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:09:51 np0005544708 systemd[1]: libpod-conmon-5740b1215d7467655a6c54f13f602763e8ee3ec6f745dc4f258205ec8b4bf54d.scope: Deactivated successfully.
Dec  3 16:09:51 np0005544708 podman[92500]: 2025-12-03 21:09:51.307197103 +0000 UTC m=+0.789881521 container init 374c7a614008d8c33432bf5e38faca3998a8e26e2c280cbcfa4c56df06633d22 (image=quay.io/ceph/ceph:v20, name=keen_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:51 np0005544708 podman[92500]: 2025-12-03 21:09:51.318964284 +0000 UTC m=+0.801648642 container start 374c7a614008d8c33432bf5e38faca3998a8e26e2c280cbcfa4c56df06633d22 (image=quay.io/ceph/ceph:v20, name=keen_euclid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:51 np0005544708 podman[92500]: 2025-12-03 21:09:51.323054268 +0000 UTC m=+0.805738666 container attach 374c7a614008d8c33432bf5e38faca3998a8e26e2c280cbcfa4c56df06633d22 (image=quay.io/ceph/ceph:v20, name=keen_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 16:09:51 np0005544708 podman[92538]: 2025-12-03 21:09:51.504174949 +0000 UTC m=+0.053648087 container create fa38c3e1b1477b5ee5355e6ae935494353f818157c1d28d19ad4ac47dfe44321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elion, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:51 np0005544708 systemd[1]: Started libpod-conmon-fa38c3e1b1477b5ee5355e6ae935494353f818157c1d28d19ad4ac47dfe44321.scope.
Dec  3 16:09:51 np0005544708 podman[92538]: 2025-12-03 21:09:51.482367513 +0000 UTC m=+0.031840611 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:51 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:51 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0309fa741b0f22f4ad240444fc8552e562fa2a168bd8ec4ba11bba7e36540e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:51 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0309fa741b0f22f4ad240444fc8552e562fa2a168bd8ec4ba11bba7e36540e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:51 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0309fa741b0f22f4ad240444fc8552e562fa2a168bd8ec4ba11bba7e36540e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:51 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0309fa741b0f22f4ad240444fc8552e562fa2a168bd8ec4ba11bba7e36540e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:51 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0309fa741b0f22f4ad240444fc8552e562fa2a168bd8ec4ba11bba7e36540e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:51 np0005544708 podman[92538]: 2025-12-03 21:09:51.628978479 +0000 UTC m=+0.178451597 container init fa38c3e1b1477b5ee5355e6ae935494353f818157c1d28d19ad4ac47dfe44321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:51 np0005544708 podman[92538]: 2025-12-03 21:09:51.635964343 +0000 UTC m=+0.185437441 container start fa38c3e1b1477b5ee5355e6ae935494353f818157c1d28d19ad4ac47dfe44321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 16:09:51 np0005544708 podman[92538]: 2025-12-03 21:09:51.641142109 +0000 UTC m=+0.190615207 container attach fa38c3e1b1477b5ee5355e6ae935494353f818157c1d28d19ad4ac47dfe44321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  3 16:09:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:09:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:09:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:09:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:09:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:09:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:09:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:09:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Dec  3 16:09:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/218063241' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Dec  3 16:09:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/218063241' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  3 16:09:51 np0005544708 systemd[1]: libpod-374c7a614008d8c33432bf5e38faca3998a8e26e2c280cbcfa4c56df06633d22.scope: Deactivated successfully.
Dec  3 16:09:51 np0005544708 podman[92500]: 2025-12-03 21:09:51.904354548 +0000 UTC m=+1.387038906 container died 374c7a614008d8c33432bf5e38faca3998a8e26e2c280cbcfa4c56df06633d22 (image=quay.io/ceph/ceph:v20, name=keen_euclid, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 16:09:51 np0005544708 systemd[1]: var-lib-containers-storage-overlay-c9040bfa689421ad13cbd409261471e0a026e8261204563f751916271015ddab-merged.mount: Deactivated successfully.
Dec  3 16:09:51 np0005544708 podman[92500]: 2025-12-03 21:09:51.966969327 +0000 UTC m=+1.449653715 container remove 374c7a614008d8c33432bf5e38faca3998a8e26e2c280cbcfa4c56df06633d22 (image=quay.io/ceph/ceph:v20, name=keen_euclid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  3 16:09:51 np0005544708 systemd[1]: libpod-conmon-374c7a614008d8c33432bf5e38faca3998a8e26e2c280cbcfa4c56df06633d22.scope: Deactivated successfully.
Dec  3 16:09:52 np0005544708 recursing_elion[92573]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:09:52 np0005544708 recursing_elion[92573]: --> All data devices are unavailable
Dec  3 16:09:52 np0005544708 systemd[1]: libpod-fa38c3e1b1477b5ee5355e6ae935494353f818157c1d28d19ad4ac47dfe44321.scope: Deactivated successfully.
Dec  3 16:09:52 np0005544708 podman[92538]: 2025-12-03 21:09:52.181992231 +0000 UTC m=+0.731465369 container died fa38c3e1b1477b5ee5355e6ae935494353f818157c1d28d19ad4ac47dfe44321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elion, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:52 np0005544708 systemd[1]: var-lib-containers-storage-overlay-c0309fa741b0f22f4ad240444fc8552e562fa2a168bd8ec4ba11bba7e36540e4-merged.mount: Deactivated successfully.
Dec  3 16:09:52 np0005544708 podman[92538]: 2025-12-03 21:09:52.255712378 +0000 UTC m=+0.805185486 container remove fa38c3e1b1477b5ee5355e6ae935494353f818157c1d28d19ad4ac47dfe44321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_elion, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec  3 16:09:52 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/218063241' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Dec  3 16:09:52 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/218063241' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  3 16:09:52 np0005544708 systemd[1]: libpod-conmon-fa38c3e1b1477b5ee5355e6ae935494353f818157c1d28d19ad4ac47dfe44321.scope: Deactivated successfully.
Dec  3 16:09:52 np0005544708 python3[92692]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:52 np0005544708 podman[92705]: 2025-12-03 21:09:52.800109483 +0000 UTC m=+0.062277154 container create 79f3a12728fb1a7ea6c7f432d930d4a43f3687b36ff747e71892518fd7e3e5c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bose, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:52 np0005544708 podman[92708]: 2025-12-03 21:09:52.811206269 +0000 UTC m=+0.055758760 container create edb29e53ba9e2c8c4f62aac063650f1075ffff88303af2011f12466c7f02e94b (image=quay.io/ceph/ceph:v20, name=peaceful_mirzakhani, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:52 np0005544708 systemd[1]: Started libpod-conmon-79f3a12728fb1a7ea6c7f432d930d4a43f3687b36ff747e71892518fd7e3e5c5.scope.
Dec  3 16:09:52 np0005544708 systemd[1]: Started libpod-conmon-edb29e53ba9e2c8c4f62aac063650f1075ffff88303af2011f12466c7f02e94b.scope.
Dec  3 16:09:52 np0005544708 podman[92705]: 2025-12-03 21:09:52.767784993 +0000 UTC m=+0.029952764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:52 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:52 np0005544708 podman[92708]: 2025-12-03 21:09:52.777084202 +0000 UTC m=+0.021636703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:52 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:52 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704ca13af7f912bf6740fe02e5d08d606cea691ae52e802ccad5b25e41f5ab10/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:52 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704ca13af7f912bf6740fe02e5d08d606cea691ae52e802ccad5b25e41f5ab10/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:52 np0005544708 podman[92708]: 2025-12-03 21:09:52.887332886 +0000 UTC m=+0.131885387 container init edb29e53ba9e2c8c4f62aac063650f1075ffff88303af2011f12466c7f02e94b (image=quay.io/ceph/ceph:v20, name=peaceful_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  3 16:09:52 np0005544708 podman[92705]: 2025-12-03 21:09:52.89048207 +0000 UTC m=+0.152649761 container init 79f3a12728fb1a7ea6c7f432d930d4a43f3687b36ff747e71892518fd7e3e5c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bose, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:52 np0005544708 podman[92708]: 2025-12-03 21:09:52.893967851 +0000 UTC m=+0.138520332 container start edb29e53ba9e2c8c4f62aac063650f1075ffff88303af2011f12466c7f02e94b (image=quay.io/ceph/ceph:v20, name=peaceful_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:52 np0005544708 podman[92705]: 2025-12-03 21:09:52.896192657 +0000 UTC m=+0.158360318 container start 79f3a12728fb1a7ea6c7f432d930d4a43f3687b36ff747e71892518fd7e3e5c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bose, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  3 16:09:52 np0005544708 podman[92708]: 2025-12-03 21:09:52.897075585 +0000 UTC m=+0.141628066 container attach edb29e53ba9e2c8c4f62aac063650f1075ffff88303af2011f12466c7f02e94b (image=quay.io/ceph/ceph:v20, name=peaceful_mirzakhani, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:09:52 np0005544708 peaceful_bose[92738]: 167 167
Dec  3 16:09:52 np0005544708 systemd[1]: libpod-79f3a12728fb1a7ea6c7f432d930d4a43f3687b36ff747e71892518fd7e3e5c5.scope: Deactivated successfully.
Dec  3 16:09:52 np0005544708 podman[92705]: 2025-12-03 21:09:52.900094136 +0000 UTC m=+0.162261827 container attach 79f3a12728fb1a7ea6c7f432d930d4a43f3687b36ff747e71892518fd7e3e5c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bose, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  3 16:09:52 np0005544708 podman[92705]: 2025-12-03 21:09:52.900617277 +0000 UTC m=+0.162784938 container died 79f3a12728fb1a7ea6c7f432d930d4a43f3687b36ff747e71892518fd7e3e5c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bose, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:52 np0005544708 systemd[1]: var-lib-containers-storage-overlay-3ed81245f586ef4a8e26d901ef0768ca3ca385113faf8c60d43fd4411743e431-merged.mount: Deactivated successfully.
Dec  3 16:09:52 np0005544708 podman[92705]: 2025-12-03 21:09:52.930837185 +0000 UTC m=+0.193004846 container remove 79f3a12728fb1a7ea6c7f432d930d4a43f3687b36ff747e71892518fd7e3e5c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bose, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:52 np0005544708 systemd[1]: libpod-conmon-79f3a12728fb1a7ea6c7f432d930d4a43f3687b36ff747e71892518fd7e3e5c5.scope: Deactivated successfully.
Dec  3 16:09:53 np0005544708 podman[92783]: 2025-12-03 21:09:53.083794491 +0000 UTC m=+0.040195243 container create ce3a03a2b83e3e5fe1599f8ce3082bc1f63c120e1f24a21aa3fb118294befb9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chaplygin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 16:09:53 np0005544708 systemd[1]: Started libpod-conmon-ce3a03a2b83e3e5fe1599f8ce3082bc1f63c120e1f24a21aa3fb118294befb9e.scope.
Dec  3 16:09:53 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:09:53 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:53 np0005544708 podman[92783]: 2025-12-03 21:09:53.06416116 +0000 UTC m=+0.020561952 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:53 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf80829b9283003a96a7bdeb6142ac5d6c48a2576de7b8cb529e7232616d01a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:53 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf80829b9283003a96a7bdeb6142ac5d6c48a2576de7b8cb529e7232616d01a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:53 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf80829b9283003a96a7bdeb6142ac5d6c48a2576de7b8cb529e7232616d01a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:53 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf80829b9283003a96a7bdeb6142ac5d6c48a2576de7b8cb529e7232616d01a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:53 np0005544708 podman[92783]: 2025-12-03 21:09:53.17530852 +0000 UTC m=+0.131709322 container init ce3a03a2b83e3e5fe1599f8ce3082bc1f63c120e1f24a21aa3fb118294befb9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chaplygin, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 16:09:53 np0005544708 podman[92783]: 2025-12-03 21:09:53.186291025 +0000 UTC m=+0.142691817 container start ce3a03a2b83e3e5fe1599f8ce3082bc1f63c120e1f24a21aa3fb118294befb9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chaplygin, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  3 16:09:53 np0005544708 podman[92783]: 2025-12-03 21:09:53.190259407 +0000 UTC m=+0.146660179 container attach ce3a03a2b83e3e5fe1599f8ce3082bc1f63c120e1f24a21aa3fb118294befb9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 16:09:53 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  3 16:09:53 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/910190000' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec  3 16:09:53 np0005544708 peaceful_mirzakhani[92740]: 
Dec  3 16:09:53 np0005544708 peaceful_mirzakhani[92740]: {"fsid":"c21de27e-a7fd-594b-8324-0697ba9aab3a","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":111,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":3,"osd_up_since":1764796165,"num_in_osds":3,"osd_in_since":1764796141,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83881984,"bytes_avail":64328044544,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2025-12-03T21:09:47:861269+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-03T21:09:23.137474+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Dec  3 16:09:53 np0005544708 systemd[1]: libpod-edb29e53ba9e2c8c4f62aac063650f1075ffff88303af2011f12466c7f02e94b.scope: Deactivated successfully.
Dec  3 16:09:53 np0005544708 podman[92708]: 2025-12-03 21:09:53.382551676 +0000 UTC m=+0.627104167 container died edb29e53ba9e2c8c4f62aac063650f1075ffff88303af2011f12466c7f02e94b (image=quay.io/ceph/ceph:v20, name=peaceful_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 16:09:53 np0005544708 systemd[1]: var-lib-containers-storage-overlay-704ca13af7f912bf6740fe02e5d08d606cea691ae52e802ccad5b25e41f5ab10-merged.mount: Deactivated successfully.
Dec  3 16:09:53 np0005544708 podman[92708]: 2025-12-03 21:09:53.421035232 +0000 UTC m=+0.665587703 container remove edb29e53ba9e2c8c4f62aac063650f1075ffff88303af2011f12466c7f02e94b (image=quay.io/ceph/ceph:v20, name=peaceful_mirzakhani, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 16:09:53 np0005544708 systemd[1]: libpod-conmon-edb29e53ba9e2c8c4f62aac063650f1075ffff88303af2011f12466c7f02e94b.scope: Deactivated successfully.
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]: {
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:    "0": [
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:        {
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "devices": [
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "/dev/loop3"
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            ],
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "lv_name": "ceph_lv0",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "lv_size": "21470642176",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "name": "ceph_lv0",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "tags": {
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.cluster_name": "ceph",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.crush_device_class": "",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.encrypted": "0",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.objectstore": "bluestore",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.osd_id": "0",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.type": "block",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.vdo": "0",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.with_tpm": "0"
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            },
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "type": "block",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "vg_name": "ceph_vg0"
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:        }
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:    ],
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:    "1": [
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:        {
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "devices": [
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "/dev/loop4"
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            ],
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "lv_name": "ceph_lv1",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "lv_size": "21470642176",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "name": "ceph_lv1",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "tags": {
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.cluster_name": "ceph",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.crush_device_class": "",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.encrypted": "0",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.objectstore": "bluestore",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.osd_id": "1",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.type": "block",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.vdo": "0",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.with_tpm": "0"
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            },
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "type": "block",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "vg_name": "ceph_vg1"
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:        }
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:    ],
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:    "2": [
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:        {
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "devices": [
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "/dev/loop5"
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            ],
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "lv_name": "ceph_lv2",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "lv_size": "21470642176",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "name": "ceph_lv2",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "tags": {
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.cluster_name": "ceph",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.crush_device_class": "",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.encrypted": "0",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.objectstore": "bluestore",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.osd_id": "2",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.type": "block",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.vdo": "0",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:                "ceph.with_tpm": "0"
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            },
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "type": "block",
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:            "vg_name": "ceph_vg2"
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:        }
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]:    ]
Dec  3 16:09:53 np0005544708 vibrant_chaplygin[92799]: }
Dec  3 16:09:53 np0005544708 systemd[1]: libpod-ce3a03a2b83e3e5fe1599f8ce3082bc1f63c120e1f24a21aa3fb118294befb9e.scope: Deactivated successfully.
Dec  3 16:09:53 np0005544708 podman[92783]: 2025-12-03 21:09:53.538013693 +0000 UTC m=+0.494414515 container died ce3a03a2b83e3e5fe1599f8ce3082bc1f63c120e1f24a21aa3fb118294befb9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  3 16:09:53 np0005544708 podman[92783]: 2025-12-03 21:09:53.587002244 +0000 UTC m=+0.543403006 container remove ce3a03a2b83e3e5fe1599f8ce3082bc1f63c120e1f24a21aa3fb118294befb9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_chaplygin, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:53 np0005544708 systemd[1]: libpod-conmon-ce3a03a2b83e3e5fe1599f8ce3082bc1f63c120e1f24a21aa3fb118294befb9e.scope: Deactivated successfully.
Dec  3 16:09:53 np0005544708 systemd[1]: var-lib-containers-storage-overlay-4cf80829b9283003a96a7bdeb6142ac5d6c48a2576de7b8cb529e7232616d01a-merged.mount: Deactivated successfully.
Dec  3 16:09:53 np0005544708 python3[92856]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:53 np0005544708 podman[92907]: 2025-12-03 21:09:53.912376343 +0000 UTC m=+0.071959571 container create b7bb3e47733486777c20fd915ee2b6b1ac127a8fe1325ac9726ef8abe89ac3cf (image=quay.io/ceph/ceph:v20, name=crazy_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 16:09:53 np0005544708 systemd[1]: Started libpod-conmon-b7bb3e47733486777c20fd915ee2b6b1ac127a8fe1325ac9726ef8abe89ac3cf.scope.
Dec  3 16:09:53 np0005544708 podman[92907]: 2025-12-03 21:09:53.886534325 +0000 UTC m=+0.046117593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:53 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:53 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4590d4f6be376aaa545584357fea44b6465f8802dffd4f7455f1e2d52eb6da3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:53 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4590d4f6be376aaa545584357fea44b6465f8802dffd4f7455f1e2d52eb6da3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:54 np0005544708 podman[92907]: 2025-12-03 21:09:54.255817873 +0000 UTC m=+0.415401181 container init b7bb3e47733486777c20fd915ee2b6b1ac127a8fe1325ac9726ef8abe89ac3cf (image=quay.io/ceph/ceph:v20, name=crazy_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec  3 16:09:54 np0005544708 podman[92907]: 2025-12-03 21:09:54.262697493 +0000 UTC m=+0.422280751 container start b7bb3e47733486777c20fd915ee2b6b1ac127a8fe1325ac9726ef8abe89ac3cf (image=quay.io/ceph/ceph:v20, name=crazy_cartwright, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Dec  3 16:09:54 np0005544708 podman[92907]: 2025-12-03 21:09:54.273645747 +0000 UTC m=+0.433228995 container attach b7bb3e47733486777c20fd915ee2b6b1ac127a8fe1325ac9726ef8abe89ac3cf (image=quay.io/ceph/ceph:v20, name=crazy_cartwright, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  3 16:09:54 np0005544708 podman[92938]: 2025-12-03 21:09:54.395504367 +0000 UTC m=+0.371987423 container create c78068a68f3816f6f13cb086f385da093897ad47ba9f1b0b61d5965fc5d06573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_herschel, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:54 np0005544708 systemd[1]: Started libpod-conmon-c78068a68f3816f6f13cb086f385da093897ad47ba9f1b0b61d5965fc5d06573.scope.
Dec  3 16:09:54 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:54 np0005544708 podman[92938]: 2025-12-03 21:09:54.453426472 +0000 UTC m=+0.429909578 container init c78068a68f3816f6f13cb086f385da093897ad47ba9f1b0b61d5965fc5d06573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec  3 16:09:54 np0005544708 podman[92938]: 2025-12-03 21:09:54.46022367 +0000 UTC m=+0.436706726 container start c78068a68f3816f6f13cb086f385da093897ad47ba9f1b0b61d5965fc5d06573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_herschel, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 16:09:54 np0005544708 ecstatic_herschel[92974]: 167 167
Dec  3 16:09:54 np0005544708 systemd[1]: libpod-c78068a68f3816f6f13cb086f385da093897ad47ba9f1b0b61d5965fc5d06573.scope: Deactivated successfully.
Dec  3 16:09:54 np0005544708 podman[92938]: 2025-12-03 21:09:54.464393945 +0000 UTC m=+0.440877051 container attach c78068a68f3816f6f13cb086f385da093897ad47ba9f1b0b61d5965fc5d06573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_herschel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:09:54 np0005544708 podman[92938]: 2025-12-03 21:09:54.464763063 +0000 UTC m=+0.441246139 container died c78068a68f3816f6f13cb086f385da093897ad47ba9f1b0b61d5965fc5d06573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_herschel, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:09:54 np0005544708 podman[92938]: 2025-12-03 21:09:54.378049801 +0000 UTC m=+0.354532907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:54 np0005544708 systemd[1]: var-lib-containers-storage-overlay-66a0c8ec7c0e4f063e57a6f3d5e876729ca264f615fe685276abef8350d47ab2-merged.mount: Deactivated successfully.
Dec  3 16:09:54 np0005544708 podman[92938]: 2025-12-03 21:09:54.510475507 +0000 UTC m=+0.486958583 container remove c78068a68f3816f6f13cb086f385da093897ad47ba9f1b0b61d5965fc5d06573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Dec  3 16:09:54 np0005544708 systemd[1]: libpod-conmon-c78068a68f3816f6f13cb086f385da093897ad47ba9f1b0b61d5965fc5d06573.scope: Deactivated successfully.
Dec  3 16:09:54 np0005544708 podman[92999]: 2025-12-03 21:09:54.663103436 +0000 UTC m=+0.040579310 container create ac98c39d52de68d52f5492bdd2eb38394377e77d81831539ab0f877b896738cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Dec  3 16:09:54 np0005544708 systemd[1]: Started libpod-conmon-ac98c39d52de68d52f5492bdd2eb38394377e77d81831539ab0f877b896738cf.scope.
Dec  3 16:09:54 np0005544708 podman[92999]: 2025-12-03 21:09:54.644422135 +0000 UTC m=+0.021898049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  3 16:09:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/978046922' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  3 16:09:54 np0005544708 crazy_cartwright[92925]: 
Dec  3 16:09:54 np0005544708 crazy_cartwright[92925]: {"epoch":1,"fsid":"c21de27e-a7fd-594b-8324-0697ba9aab3a","modified":"2025-12-03T21:07:57.000116Z","created":"2025-12-03T21:07:57.000116Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Dec  3 16:09:54 np0005544708 crazy_cartwright[92925]: dumped monmap epoch 1
Dec  3 16:09:54 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:54 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff72b479d959da9c9768e73b58cb121d8c9d74bf9112d71493968e6fc03d29fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:54 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff72b479d959da9c9768e73b58cb121d8c9d74bf9112d71493968e6fc03d29fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:54 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff72b479d959da9c9768e73b58cb121d8c9d74bf9112d71493968e6fc03d29fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:54 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff72b479d959da9c9768e73b58cb121d8c9d74bf9112d71493968e6fc03d29fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:54 np0005544708 systemd[1]: libpod-b7bb3e47733486777c20fd915ee2b6b1ac127a8fe1325ac9726ef8abe89ac3cf.scope: Deactivated successfully.
Dec  3 16:09:54 np0005544708 podman[92907]: 2025-12-03 21:09:54.783911315 +0000 UTC m=+0.943494553 container died b7bb3e47733486777c20fd915ee2b6b1ac127a8fe1325ac9726ef8abe89ac3cf (image=quay.io/ceph/ceph:v20, name=crazy_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:54 np0005544708 podman[92999]: 2025-12-03 21:09:54.799381591 +0000 UTC m=+0.176857525 container init ac98c39d52de68d52f5492bdd2eb38394377e77d81831539ab0f877b896738cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mahavira, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 16:09:54 np0005544708 systemd[1]: var-lib-containers-storage-overlay-c4590d4f6be376aaa545584357fea44b6465f8802dffd4f7455f1e2d52eb6da3-merged.mount: Deactivated successfully.
Dec  3 16:09:54 np0005544708 podman[92999]: 2025-12-03 21:09:54.809632231 +0000 UTC m=+0.187108095 container start ac98c39d52de68d52f5492bdd2eb38394377e77d81831539ab0f877b896738cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  3 16:09:54 np0005544708 podman[92999]: 2025-12-03 21:09:54.821315719 +0000 UTC m=+0.198791673 container attach ac98c39d52de68d52f5492bdd2eb38394377e77d81831539ab0f877b896738cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mahavira, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 16:09:54 np0005544708 podman[92907]: 2025-12-03 21:09:54.82674553 +0000 UTC m=+0.986328748 container remove b7bb3e47733486777c20fd915ee2b6b1ac127a8fe1325ac9726ef8abe89ac3cf (image=quay.io/ceph/ceph:v20, name=crazy_cartwright, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:54 np0005544708 systemd[1]: libpod-conmon-b7bb3e47733486777c20fd915ee2b6b1ac127a8fe1325ac9726ef8abe89ac3cf.scope: Deactivated successfully.
Dec  3 16:09:55 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:09:55 np0005544708 python3[93080]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:55 np0005544708 podman[93114]: 2025-12-03 21:09:55.370729737 +0000 UTC m=+0.022348698 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:55 np0005544708 lvm[93139]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:09:55 np0005544708 lvm[93139]: VG ceph_vg0 finished
Dec  3 16:09:55 np0005544708 lvm[93142]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:09:55 np0005544708 lvm[93142]: VG ceph_vg1 finished
Dec  3 16:09:55 np0005544708 lvm[93144]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:09:55 np0005544708 lvm[93144]: VG ceph_vg2 finished
Dec  3 16:09:55 np0005544708 compassionate_mahavira[93015]: {}
Dec  3 16:09:55 np0005544708 systemd[1]: libpod-ac98c39d52de68d52f5492bdd2eb38394377e77d81831539ab0f877b896738cf.scope: Deactivated successfully.
Dec  3 16:09:55 np0005544708 systemd[1]: libpod-ac98c39d52de68d52f5492bdd2eb38394377e77d81831539ab0f877b896738cf.scope: Consumed 1.373s CPU time.
Dec  3 16:09:55 np0005544708 podman[93114]: 2025-12-03 21:09:55.672387882 +0000 UTC m=+0.324006803 container create a8648d6495de6ca6f7cebd944dc83911eb62d77acf1a84b8297c8a95325c24a4 (image=quay.io/ceph/ceph:v20, name=compassionate_lewin, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  3 16:09:55 np0005544708 podman[92999]: 2025-12-03 21:09:55.672945163 +0000 UTC m=+1.050421067 container died ac98c39d52de68d52f5492bdd2eb38394377e77d81831539ab0f877b896738cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mahavira, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec  3 16:09:55 np0005544708 systemd[1]: Started libpod-conmon-a8648d6495de6ca6f7cebd944dc83911eb62d77acf1a84b8297c8a95325c24a4.scope.
Dec  3 16:09:55 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:55 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fba48e3eb6a66c8b2feba6ffe873a6c0ae95e449f3bd5bef73fdb57e253d5ad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:55 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fba48e3eb6a66c8b2feba6ffe873a6c0ae95e449f3bd5bef73fdb57e253d5ad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:55 np0005544708 systemd[1]: var-lib-containers-storage-overlay-ff72b479d959da9c9768e73b58cb121d8c9d74bf9112d71493968e6fc03d29fa-merged.mount: Deactivated successfully.
Dec  3 16:09:55 np0005544708 podman[93114]: 2025-12-03 21:09:55.793078959 +0000 UTC m=+0.444697880 container init a8648d6495de6ca6f7cebd944dc83911eb62d77acf1a84b8297c8a95325c24a4 (image=quay.io/ceph/ceph:v20, name=compassionate_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:55 np0005544708 podman[92999]: 2025-12-03 21:09:55.801833997 +0000 UTC m=+1.179309871 container remove ac98c39d52de68d52f5492bdd2eb38394377e77d81831539ab0f877b896738cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:55 np0005544708 podman[93114]: 2025-12-03 21:09:55.802227395 +0000 UTC m=+0.453846316 container start a8648d6495de6ca6f7cebd944dc83911eb62d77acf1a84b8297c8a95325c24a4 (image=quay.io/ceph/ceph:v20, name=compassionate_lewin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:09:55 np0005544708 podman[93114]: 2025-12-03 21:09:55.974634829 +0000 UTC m=+0.626253790 container attach a8648d6495de6ca6f7cebd944dc83911eb62d77acf1a84b8297c8a95325c24a4 (image=quay.io/ceph/ceph:v20, name=compassionate_lewin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 16:09:55 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:09:55 np0005544708 systemd[1]: libpod-conmon-ac98c39d52de68d52f5492bdd2eb38394377e77d81831539ab0f877b896738cf.scope: Deactivated successfully.
Dec  3 16:09:56 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:56 np0005544708 ceph-mgr[75500]: [progress INFO root] update: starting ev 765805c2-a519-46f9-90b4-73dfda2b5520 (Updating mds.cephfs deployment (+1 -> 1))
Dec  3 16:09:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gzkqle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec  3 16:09:56 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gzkqle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec  3 16:09:56 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gzkqle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  3 16:09:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:09:56 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:09:56 np0005544708 ceph-mgr[75500]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.gzkqle on compute-0
Dec  3 16:09:56 np0005544708 ceph-mgr[75500]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.gzkqle on compute-0
Dec  3 16:09:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Dec  3 16:09:56 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3575152023' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Dec  3 16:09:56 np0005544708 compassionate_lewin[93164]: [client.openstack]
Dec  3 16:09:56 np0005544708 compassionate_lewin[93164]: #011key = AQB5pjBpAAAAABAAKWIHAEu4Fcpg9BW4WoYnAg==
Dec  3 16:09:56 np0005544708 compassionate_lewin[93164]: #011caps mgr = "allow *"
Dec  3 16:09:56 np0005544708 compassionate_lewin[93164]: #011caps mon = "profile rbd"
Dec  3 16:09:56 np0005544708 compassionate_lewin[93164]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec  3 16:09:56 np0005544708 systemd[1]: libpod-a8648d6495de6ca6f7cebd944dc83911eb62d77acf1a84b8297c8a95325c24a4.scope: Deactivated successfully.
Dec  3 16:09:56 np0005544708 conmon[93164]: conmon a8648d6495de6ca6f7ce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8648d6495de6ca6f7cebd944dc83911eb62d77acf1a84b8297c8a95325c24a4.scope/container/memory.events
Dec  3 16:09:56 np0005544708 podman[93114]: 2025-12-03 21:09:56.475632217 +0000 UTC m=+1.127251138 container died a8648d6495de6ca6f7cebd944dc83911eb62d77acf1a84b8297c8a95325c24a4 (image=quay.io/ceph/ceph:v20, name=compassionate_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:09:56 np0005544708 systemd[1]: var-lib-containers-storage-overlay-6fba48e3eb6a66c8b2feba6ffe873a6c0ae95e449f3bd5bef73fdb57e253d5ad-merged.mount: Deactivated successfully.
Dec  3 16:09:56 np0005544708 podman[93114]: 2025-12-03 21:09:56.570996266 +0000 UTC m=+1.222615237 container remove a8648d6495de6ca6f7cebd944dc83911eb62d77acf1a84b8297c8a95325c24a4 (image=quay.io/ceph/ceph:v20, name=compassionate_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:56 np0005544708 systemd[1]: libpod-conmon-a8648d6495de6ca6f7cebd944dc83911eb62d77acf1a84b8297c8a95325c24a4.scope: Deactivated successfully.
Dec  3 16:09:56 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:56 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:56 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gzkqle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec  3 16:09:56 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gzkqle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  3 16:09:56 np0005544708 ceph-mon[75204]: Deploying daemon mds.cephfs.compute-0.gzkqle on compute-0
Dec  3 16:09:56 np0005544708 ceph-mon[75204]: from='client.? 192.168.122.100:0/3575152023' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Dec  3 16:09:56 np0005544708 podman[93289]: 2025-12-03 21:09:56.686384734 +0000 UTC m=+0.047334098 container create ed820a545b2b0787f00c74a6f79762264f1bf919c8e31757a9e44c2fc7016dc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_cannon, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:56 np0005544708 systemd[1]: Started libpod-conmon-ed820a545b2b0787f00c74a6f79762264f1bf919c8e31757a9e44c2fc7016dc3.scope.
Dec  3 16:09:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:09:56 np0005544708 podman[93289]: 2025-12-03 21:09:56.666307164 +0000 UTC m=+0.027256588 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:56 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:56 np0005544708 podman[93289]: 2025-12-03 21:09:56.773533245 +0000 UTC m=+0.134482609 container init ed820a545b2b0787f00c74a6f79762264f1bf919c8e31757a9e44c2fc7016dc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_cannon, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:56 np0005544708 podman[93289]: 2025-12-03 21:09:56.779674601 +0000 UTC m=+0.140623965 container start ed820a545b2b0787f00c74a6f79762264f1bf919c8e31757a9e44c2fc7016dc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec  3 16:09:56 np0005544708 nervous_cannon[93306]: 167 167
Dec  3 16:09:56 np0005544708 podman[93289]: 2025-12-03 21:09:56.783045919 +0000 UTC m=+0.143995283 container attach ed820a545b2b0787f00c74a6f79762264f1bf919c8e31757a9e44c2fc7016dc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 16:09:56 np0005544708 systemd[1]: libpod-ed820a545b2b0787f00c74a6f79762264f1bf919c8e31757a9e44c2fc7016dc3.scope: Deactivated successfully.
Dec  3 16:09:56 np0005544708 podman[93289]: 2025-12-03 21:09:56.783975039 +0000 UTC m=+0.144924403 container died ed820a545b2b0787f00c74a6f79762264f1bf919c8e31757a9e44c2fc7016dc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  3 16:09:56 np0005544708 systemd[1]: var-lib-containers-storage-overlay-6774216d207ff941eafcda8fa50a5f654e1faf0da3794bf55801f2233ca8a42a-merged.mount: Deactivated successfully.
Dec  3 16:09:56 np0005544708 podman[93289]: 2025-12-03 21:09:56.819996214 +0000 UTC m=+0.180945578 container remove ed820a545b2b0787f00c74a6f79762264f1bf919c8e31757a9e44c2fc7016dc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:09:56 np0005544708 systemd[1]: libpod-conmon-ed820a545b2b0787f00c74a6f79762264f1bf919c8e31757a9e44c2fc7016dc3.scope: Deactivated successfully.
Dec  3 16:09:56 np0005544708 systemd[1]: Reloading.
Dec  3 16:09:56 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:09:56 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:09:57 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:09:57 np0005544708 systemd[1]: Reloading.
Dec  3 16:09:57 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:09:57 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:09:57 np0005544708 systemd[1]: Starting Ceph mds.cephfs.compute-0.gzkqle for c21de27e-a7fd-594b-8324-0697ba9aab3a...
Dec  3 16:09:57 np0005544708 podman[93510]: 2025-12-03 21:09:57.870640875 +0000 UTC m=+0.057503735 container create 696a375e6a5a7e1fe4808c2104075d852001912137351307176607b220314d9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mds-cephfs-compute-0-gzkqle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 16:09:57 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1370250e29db15f8546b911b832718ee75670b93b2501a1ab31cb7db25291d5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:57 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1370250e29db15f8546b911b832718ee75670b93b2501a1ab31cb7db25291d5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:57 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1370250e29db15f8546b911b832718ee75670b93b2501a1ab31cb7db25291d5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:57 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1370250e29db15f8546b911b832718ee75670b93b2501a1ab31cb7db25291d5c/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.gzkqle supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:57 np0005544708 podman[93510]: 2025-12-03 21:09:57.846728217 +0000 UTC m=+0.033591127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:09:57 np0005544708 podman[93510]: 2025-12-03 21:09:57.954757054 +0000 UTC m=+0.141619954 container init 696a375e6a5a7e1fe4808c2104075d852001912137351307176607b220314d9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mds-cephfs-compute-0-gzkqle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 16:09:57 np0005544708 podman[93510]: 2025-12-03 21:09:57.970636639 +0000 UTC m=+0.157499519 container start 696a375e6a5a7e1fe4808c2104075d852001912137351307176607b220314d9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mds-cephfs-compute-0-gzkqle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:09:57 np0005544708 bash[93510]: 696a375e6a5a7e1fe4808c2104075d852001912137351307176607b220314d9c
Dec  3 16:09:57 np0005544708 systemd[1]: Started Ceph mds.cephfs.compute-0.gzkqle for c21de27e-a7fd-594b-8324-0697ba9aab3a.
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: set uid:gid to 167:167 (ceph:ceph)
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: main not setting numa affinity
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: pidfile_write: ignore empty --pid-file
Dec  3 16:09:58 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mds-cephfs-compute-0-gzkqle[93561]: starting mds.cephfs.compute-0.gzkqle at 
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle Updating MDS map to version 2 from mon.0
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:58 np0005544708 ceph-mgr[75500]: [progress INFO root] complete: finished ev 765805c2-a519-46f9-90b4-73dfda2b5520 (Updating mds.cephfs deployment (+1 -> 1))
Dec  3 16:09:58 np0005544708 ceph-mgr[75500]: [progress INFO root] Completed event 765805c2-a519-46f9-90b4-73dfda2b5520 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:58 np0005544708 ansible-async_wrapper.py[93635]: Invoked with j892459367166 30 /home/zuul/.ansible/tmp/ansible-tmp-1764796197.6702933-36878-224517071162573/AnsiballZ_command.py _
Dec  3 16:09:58 np0005544708 ansible-async_wrapper.py[93693]: Starting module and watcher
Dec  3 16:09:58 np0005544708 ansible-async_wrapper.py[93693]: Start watching 93695 (30)
Dec  3 16:09:58 np0005544708 ansible-async_wrapper.py[93695]: Start module (93695)
Dec  3 16:09:58 np0005544708 ansible-async_wrapper.py[93635]: Return async_wrapper task started.
Dec  3 16:09:58 np0005544708 python3[93699]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:09:58 np0005544708 podman[93716]: 2025-12-03 21:09:58.515058425 +0000 UTC m=+0.079018676 container create dcfe9b2c5f286e305b4cdc3fa215ad4f3372a547a227fcf155804e9034d6e001 (image=quay.io/ceph/ceph:v20, name=goofy_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec  3 16:09:58 np0005544708 systemd[1]: Started libpod-conmon-dcfe9b2c5f286e305b4cdc3fa215ad4f3372a547a227fcf155804e9034d6e001.scope.
Dec  3 16:09:58 np0005544708 podman[93716]: 2025-12-03 21:09:58.478500818 +0000 UTC m=+0.042461129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:09:58 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:09:58 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ab1ef4295c2fbcba9f6de81d10d2fb9312b853db1506f75006916f692ba934e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:58 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ab1ef4295c2fbcba9f6de81d10d2fb9312b853db1506f75006916f692ba934e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:09:58 np0005544708 podman[93716]: 2025-12-03 21:09:58.636379465 +0000 UTC m=+0.200339796 container init dcfe9b2c5f286e305b4cdc3fa215ad4f3372a547a227fcf155804e9034d6e001 (image=quay.io/ceph/ceph:v20, name=goofy_babbage, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec  3 16:09:58 np0005544708 podman[93716]: 2025-12-03 21:09:58.649514242 +0000 UTC m=+0.213474493 container start dcfe9b2c5f286e305b4cdc3fa215ad4f3372a547a227fcf155804e9034d6e001 (image=quay.io/ceph/ceph:v20, name=goofy_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:58 np0005544708 podman[93716]: 2025-12-03 21:09:58.653876432 +0000 UTC m=+0.217836673 container attach dcfe9b2c5f286e305b4cdc3fa215ad4f3372a547a227fcf155804e9034d6e001 (image=quay.io/ceph/ceph:v20, name=goofy_babbage, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).mds e3 new map
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2025-12-03T21:09:58:698664+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-03T21:09:47.860950+0000#012modified#0112025-12-03T21:09:47.860950+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.gzkqle{-1:14242} state up:standby seq 1 addr [v2:192.168.122.100:6814/3914722781,v1:192.168.122.100:6815/3914722781] compat {c=[1],r=[1],i=[1fff]}]
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle Updating MDS map to version 3 from mon.0
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle Monitors have assigned me to become a standby
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3914722781,v1:192.168.122.100:6815/3914722781] up:boot
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/3914722781,v1:192.168.122.100:6815/3914722781] as mds.0
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.gzkqle assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.gzkqle"} v 0)
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.gzkqle"} : dispatch
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).mds e3 all = 0
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).mds e4 new map
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2025-12-03T21:09:58:707549+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-03T21:09:47.860950+0000#012modified#0112025-12-03T21:09:58.707539+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14242}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-0.gzkqle{0:14242} state up:creating seq 1 addr [v2:192.168.122.100:6814/3914722781,v1:192.168.122.100:6815/3914722781] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle Updating MDS map to version 4 from mon.0
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.gzkqle=up:creating}
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.0.4 handle_mds_map I am now mds.0.4
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.0.cache creating system inode with ino:0x1
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.0.cache creating system inode with ino:0x100
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.0.cache creating system inode with ino:0x600
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.0.cache creating system inode with ino:0x601
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.0.cache creating system inode with ino:0x602
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.0.cache creating system inode with ino:0x603
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.0.cache creating system inode with ino:0x604
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.0.cache creating system inode with ino:0x605
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.0.cache creating system inode with ino:0x606
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.0.cache creating system inode with ino:0x607
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.0.cache creating system inode with ino:0x608
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.0.cache creating system inode with ino:0x609
Dec  3 16:09:58 np0005544708 ceph-mds[93586]: mds.0.4 creating_done
Dec  3 16:09:58 np0005544708 ceph-mon[75204]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.gzkqle is now active in filesystem cephfs as rank 0
Dec  3 16:09:58 np0005544708 podman[93784]: 2025-12-03 21:09:58.844910516 +0000 UTC m=+0.073697307 container exec 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:09:58 np0005544708 podman[93784]: 2025-12-03 21:09:58.99631774 +0000 UTC m=+0.225104501 container exec_died 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: daemon mds.cephfs.compute-0.gzkqle assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: Cluster is now healthy
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: daemon mds.cephfs.compute-0.gzkqle is now active in filesystem cephfs as rank 0
Dec  3 16:09:59 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  3 16:09:59 np0005544708 goofy_babbage[93744]: 
Dec  3 16:09:59 np0005544708 goofy_babbage[93744]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  3 16:09:59 np0005544708 systemd[1]: libpod-dcfe9b2c5f286e305b4cdc3fa215ad4f3372a547a227fcf155804e9034d6e001.scope: Deactivated successfully.
Dec  3 16:09:59 np0005544708 podman[93716]: 2025-12-03 21:09:59.110465943 +0000 UTC m=+0.674426174 container died dcfe9b2c5f286e305b4cdc3fa215ad4f3372a547a227fcf155804e9034d6e001 (image=quay.io/ceph/ceph:v20, name=goofy_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:09:59 np0005544708 systemd[1]: var-lib-containers-storage-overlay-7ab1ef4295c2fbcba9f6de81d10d2fb9312b853db1506f75006916f692ba934e-merged.mount: Deactivated successfully.
Dec  3 16:09:59 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:09:59 np0005544708 podman[93716]: 2025-12-03 21:09:59.153162195 +0000 UTC m=+0.717122426 container remove dcfe9b2c5f286e305b4cdc3fa215ad4f3372a547a227fcf155804e9034d6e001 (image=quay.io/ceph/ceph:v20, name=goofy_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec  3 16:09:59 np0005544708 systemd[1]: libpod-conmon-dcfe9b2c5f286e305b4cdc3fa215ad4f3372a547a227fcf155804e9034d6e001.scope: Deactivated successfully.
Dec  3 16:09:59 np0005544708 ansible-async_wrapper.py[93695]: Module complete (93695)
Dec  3 16:09:59 np0005544708 python3[93997]: ansible-ansible.legacy.async_status Invoked with jid=j892459367166.93635 mode=status _async_dir=/root/.ansible_async
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).mds e5 new map
Dec  3 16:09:59 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle Updating MDS map to version 5 from mon.0
Dec  3 16:09:59 np0005544708 ceph-mds[93586]: mds.0.4 handle_mds_map I am now mds.0.4
Dec  3 16:09:59 np0005544708 ceph-mds[93586]: mds.0.4 handle_mds_map state change up:creating --> up:active
Dec  3 16:09:59 np0005544708 ceph-mds[93586]: mds.0.4 recovery_done -- successful recovery!
Dec  3 16:09:59 np0005544708 ceph-mds[93586]: mds.0.4 active_start
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).mds e5 print_map
e5
btime 2025-12-03T21:09:59:713283+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-12-03T21:09:47.860950+0000
modified	2025-12-03T21:09:59.713281+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in	0
up	{0=14242}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	0
qdb_cluster	leader: 14242 members: 14242
[mds.cephfs.compute-0.gzkqle{0:14242} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/3914722781,v1:192.168.122.100:6815/3914722781] compat {c=[1],r=[1],i=[1fff]}]
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3914722781,v1:192.168.122.100:6815/3914722781] up:active
Dec  3 16:09:59 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.gzkqle=up:active}
Dec  3 16:09:59 np0005544708 python3[94090]: ansible-ansible.legacy.async_status Invoked with jid=j892459367166.93635 mode=cleanup _async_dir=/root/.ansible_async
Dec  3 16:10:00 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:00 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:00 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:10:00 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:00 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:10:00 np0005544708 podman[94149]: 2025-12-03 21:10:00.113012201 +0000 UTC m=+0.061831364 container create f122a5cc9b1a0eba0e2a8cdc3af9db8e49087dc154ca01d289c1baadcc596a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_moore, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  3 16:10:00 np0005544708 systemd[1]: Started libpod-conmon-f122a5cc9b1a0eba0e2a8cdc3af9db8e49087dc154ca01d289c1baadcc596a51.scope.
Dec  3 16:10:00 np0005544708 podman[94149]: 2025-12-03 21:10:00.08848124 +0000 UTC m=+0.037300393 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:10:00 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:00 np0005544708 podman[94149]: 2025-12-03 21:10:00.214496215 +0000 UTC m=+0.163315428 container init f122a5cc9b1a0eba0e2a8cdc3af9db8e49087dc154ca01d289c1baadcc596a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_moore, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 16:10:00 np0005544708 podman[94149]: 2025-12-03 21:10:00.22794072 +0000 UTC m=+0.176759883 container start f122a5cc9b1a0eba0e2a8cdc3af9db8e49087dc154ca01d289c1baadcc596a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_moore, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 16:10:00 np0005544708 podman[94149]: 2025-12-03 21:10:00.233464863 +0000 UTC m=+0.182299676 container attach f122a5cc9b1a0eba0e2a8cdc3af9db8e49087dc154ca01d289c1baadcc596a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_moore, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:10:00 np0005544708 serene_moore[94166]: 167 167
Dec  3 16:10:00 np0005544708 systemd[1]: libpod-f122a5cc9b1a0eba0e2a8cdc3af9db8e49087dc154ca01d289c1baadcc596a51.scope: Deactivated successfully.
Dec  3 16:10:00 np0005544708 podman[94149]: 2025-12-03 21:10:00.235832642 +0000 UTC m=+0.184651825 container died f122a5cc9b1a0eba0e2a8cdc3af9db8e49087dc154ca01d289c1baadcc596a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_moore, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec  3 16:10:00 np0005544708 systemd[1]: var-lib-containers-storage-overlay-22de79ed5dffd72224bca820dce6861ff6dc566b38cbce86276cbe23fe023983-merged.mount: Deactivated successfully.
Dec  3 16:10:00 np0005544708 podman[94149]: 2025-12-03 21:10:00.280901553 +0000 UTC m=+0.229720686 container remove f122a5cc9b1a0eba0e2a8cdc3af9db8e49087dc154ca01d289c1baadcc596a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_moore, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  3 16:10:00 np0005544708 systemd[1]: libpod-conmon-f122a5cc9b1a0eba0e2a8cdc3af9db8e49087dc154ca01d289c1baadcc596a51.scope: Deactivated successfully.
Dec  3 16:10:00 np0005544708 python3[94207]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:10:00 np0005544708 podman[94214]: 2025-12-03 21:10:00.44598991 +0000 UTC m=+0.050962863 container create 10a6be72d412f3b30050c4f0912ed4847ce376d6bce7f349e14a59fb67da0045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_ellis, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  3 16:10:00 np0005544708 podman[94227]: 2025-12-03 21:10:00.484845659 +0000 UTC m=+0.056081080 container create aedd04ec8cb1c1c1f47543c232853453dbb6db4e60899a631c10a7331376412d (image=quay.io/ceph/ceph:v20, name=elegant_satoshi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:10:00 np0005544708 systemd[1]: Started libpod-conmon-10a6be72d412f3b30050c4f0912ed4847ce376d6bce7f349e14a59fb67da0045.scope.
Dec  3 16:10:00 np0005544708 podman[94214]: 2025-12-03 21:10:00.425464931 +0000 UTC m=+0.030437904 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:10:00 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:00 np0005544708 systemd[1]: Started libpod-conmon-aedd04ec8cb1c1c1f47543c232853453dbb6db4e60899a631c10a7331376412d.scope.
Dec  3 16:10:00 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bebfa8646a815e8c7fe4004da4bd61e2904a81607cda449252b516ff7a88ad4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:00 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bebfa8646a815e8c7fe4004da4bd61e2904a81607cda449252b516ff7a88ad4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:00 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bebfa8646a815e8c7fe4004da4bd61e2904a81607cda449252b516ff7a88ad4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:00 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bebfa8646a815e8c7fe4004da4bd61e2904a81607cda449252b516ff7a88ad4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:00 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bebfa8646a815e8c7fe4004da4bd61e2904a81607cda449252b516ff7a88ad4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:00 np0005544708 podman[94214]: 2025-12-03 21:10:00.551327074 +0000 UTC m=+0.156300047 container init 10a6be72d412f3b30050c4f0912ed4847ce376d6bce7f349e14a59fb67da0045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:10:00 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:00 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc23dea4cd64f6a6781804282442521867f65e4c194a91a6a7f2d1ee152cc6a4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:00 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc23dea4cd64f6a6781804282442521867f65e4c194a91a6a7f2d1ee152cc6a4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:00 np0005544708 podman[94214]: 2025-12-03 21:10:00.559037821 +0000 UTC m=+0.164010774 container start 10a6be72d412f3b30050c4f0912ed4847ce376d6bce7f349e14a59fb67da0045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_ellis, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec  3 16:10:00 np0005544708 podman[94227]: 2025-12-03 21:10:00.466669692 +0000 UTC m=+0.037905123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:10:00 np0005544708 podman[94214]: 2025-12-03 21:10:00.563005547 +0000 UTC m=+0.167978510 container attach 10a6be72d412f3b30050c4f0912ed4847ce376d6bce7f349e14a59fb67da0045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:10:00 np0005544708 podman[94227]: 2025-12-03 21:10:00.57809273 +0000 UTC m=+0.149328191 container init aedd04ec8cb1c1c1f47543c232853453dbb6db4e60899a631c10a7331376412d (image=quay.io/ceph/ceph:v20, name=elegant_satoshi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 16:10:00 np0005544708 podman[94227]: 2025-12-03 21:10:00.593038759 +0000 UTC m=+0.164274180 container start aedd04ec8cb1c1c1f47543c232853453dbb6db4e60899a631c10a7331376412d (image=quay.io/ceph/ceph:v20, name=elegant_satoshi, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 16:10:00 np0005544708 podman[94227]: 2025-12-03 21:10:00.596151022 +0000 UTC m=+0.167386443 container attach aedd04ec8cb1c1c1f47543c232853453dbb6db4e60899a631c10a7331376412d (image=quay.io/ceph/ceph:v20, name=elegant_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 16:10:01 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  3 16:10:01 np0005544708 elegant_satoshi[94249]: 
Dec  3 16:10:01 np0005544708 elegant_satoshi[94249]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  3 16:10:01 np0005544708 systemd[1]: libpod-aedd04ec8cb1c1c1f47543c232853453dbb6db4e60899a631c10a7331376412d.scope: Deactivated successfully.
Dec  3 16:10:01 np0005544708 podman[94227]: 2025-12-03 21:10:01.108819893 +0000 UTC m=+0.680055304 container died aedd04ec8cb1c1c1f47543c232853453dbb6db4e60899a631c10a7331376412d (image=quay.io/ceph/ceph:v20, name=elegant_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:10:01 np0005544708 systemd[1]: var-lib-containers-storage-overlay-fc23dea4cd64f6a6781804282442521867f65e4c194a91a6a7f2d1ee152cc6a4-merged.mount: Deactivated successfully.
Dec  3 16:10:01 np0005544708 sad_ellis[94241]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:10:01 np0005544708 sad_ellis[94241]: --> All data devices are unavailable
Dec  3 16:10:01 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Dec  3 16:10:01 np0005544708 podman[94227]: 2025-12-03 21:10:01.152140231 +0000 UTC m=+0.723375652 container remove aedd04ec8cb1c1c1f47543c232853453dbb6db4e60899a631c10a7331376412d (image=quay.io/ceph/ceph:v20, name=elegant_satoshi, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:10:01 np0005544708 systemd[1]: libpod-conmon-aedd04ec8cb1c1c1f47543c232853453dbb6db4e60899a631c10a7331376412d.scope: Deactivated successfully.
Dec  3 16:10:01 np0005544708 systemd[1]: libpod-10a6be72d412f3b30050c4f0912ed4847ce376d6bce7f349e14a59fb67da0045.scope: Deactivated successfully.
Dec  3 16:10:01 np0005544708 podman[94214]: 2025-12-03 21:10:01.180981791 +0000 UTC m=+0.785954764 container died 10a6be72d412f3b30050c4f0912ed4847ce376d6bce7f349e14a59fb67da0045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 16:10:01 np0005544708 systemd[1]: var-lib-containers-storage-overlay-4bebfa8646a815e8c7fe4004da4bd61e2904a81607cda449252b516ff7a88ad4-merged.mount: Deactivated successfully.
Dec  3 16:10:01 np0005544708 podman[94214]: 2025-12-03 21:10:01.23030035 +0000 UTC m=+0.835273303 container remove 10a6be72d412f3b30050c4f0912ed4847ce376d6bce7f349e14a59fb67da0045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 16:10:01 np0005544708 systemd[1]: libpod-conmon-10a6be72d412f3b30050c4f0912ed4847ce376d6bce7f349e14a59fb67da0045.scope: Deactivated successfully.
Dec  3 16:10:01 np0005544708 podman[94377]: 2025-12-03 21:10:01.680236113 +0000 UTC m=+0.041580262 container create 4405fba53c416947f1ad61c93aa74182911dda92bdc69977072ceae788127d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec  3 16:10:01 np0005544708 systemd[1]: Started libpod-conmon-4405fba53c416947f1ad61c93aa74182911dda92bdc69977072ceae788127d5b.scope.
Dec  3 16:10:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:10:01 np0005544708 podman[94377]: 2025-12-03 21:10:01.661945545 +0000 UTC m=+0.023289714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:10:01 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:01 np0005544708 podman[94377]: 2025-12-03 21:10:01.777125573 +0000 UTC m=+0.138469772 container init 4405fba53c416947f1ad61c93aa74182911dda92bdc69977072ceae788127d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:10:01 np0005544708 ceph-mgr[75500]: [progress INFO root] Writing back 4 completed events
Dec  3 16:10:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  3 16:10:01 np0005544708 podman[94377]: 2025-12-03 21:10:01.789064582 +0000 UTC m=+0.150408731 container start 4405fba53c416947f1ad61c93aa74182911dda92bdc69977072ceae788127d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:10:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:01 np0005544708 podman[94377]: 2025-12-03 21:10:01.792810542 +0000 UTC m=+0.154154781 container attach 4405fba53c416947f1ad61c93aa74182911dda92bdc69977072ceae788127d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:10:01 np0005544708 eager_mclean[94393]: 167 167
Dec  3 16:10:01 np0005544708 systemd[1]: libpod-4405fba53c416947f1ad61c93aa74182911dda92bdc69977072ceae788127d5b.scope: Deactivated successfully.
Dec  3 16:10:01 np0005544708 podman[94377]: 2025-12-03 21:10:01.794672431 +0000 UTC m=+0.156016640 container died 4405fba53c416947f1ad61c93aa74182911dda92bdc69977072ceae788127d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_mclean, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:10:01 np0005544708 systemd[1]: var-lib-containers-storage-overlay-f406ae5b18f923d2bf6149a15c19ec26d219b5e6b06468e3d1e569fb66376a2b-merged.mount: Deactivated successfully.
Dec  3 16:10:01 np0005544708 podman[94377]: 2025-12-03 21:10:01.855758174 +0000 UTC m=+0.217102333 container remove 4405fba53c416947f1ad61c93aa74182911dda92bdc69977072ceae788127d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_mclean, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:10:01 np0005544708 systemd[1]: libpod-conmon-4405fba53c416947f1ad61c93aa74182911dda92bdc69977072ceae788127d5b.scope: Deactivated successfully.
Dec  3 16:10:02 np0005544708 podman[94445]: 2025-12-03 21:10:02.022896541 +0000 UTC m=+0.049853454 container create 82ec1722127ae299a52d45151adf2a5dc205db6c28e92bbee53decb4cb67ff3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_volhard, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:10:02 np0005544708 python3[94439]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:10:02 np0005544708 systemd[1]: Started libpod-conmon-82ec1722127ae299a52d45151adf2a5dc205db6c28e92bbee53decb4cb67ff3d.scope.
Dec  3 16:10:02 np0005544708 podman[94445]: 2025-12-03 21:10:02.001253002 +0000 UTC m=+0.028209915 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:10:02 np0005544708 podman[94459]: 2025-12-03 21:10:02.098189893 +0000 UTC m=+0.047212693 container create c3e064ab20c10af36be7c7d6cbdf22a6e7a22913a43e641d957ad1f2565e51db (image=quay.io/ceph/ceph:v20, name=stupefied_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:10:02 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:02 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7b01352322ed5d62984b3c896d782745067f29827a591abbf145e30e9d4dca3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:02 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7b01352322ed5d62984b3c896d782745067f29827a591abbf145e30e9d4dca3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:02 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7b01352322ed5d62984b3c896d782745067f29827a591abbf145e30e9d4dca3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:02 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7b01352322ed5d62984b3c896d782745067f29827a591abbf145e30e9d4dca3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:02 np0005544708 podman[94445]: 2025-12-03 21:10:02.11940689 +0000 UTC m=+0.146363833 container init 82ec1722127ae299a52d45151adf2a5dc205db6c28e92bbee53decb4cb67ff3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_volhard, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec  3 16:10:02 np0005544708 systemd[1]: Started libpod-conmon-c3e064ab20c10af36be7c7d6cbdf22a6e7a22913a43e641d957ad1f2565e51db.scope.
Dec  3 16:10:02 np0005544708 podman[94445]: 2025-12-03 21:10:02.127203158 +0000 UTC m=+0.154160051 container start 82ec1722127ae299a52d45151adf2a5dc205db6c28e92bbee53decb4cb67ff3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_volhard, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec  3 16:10:02 np0005544708 podman[94445]: 2025-12-03 21:10:02.131955255 +0000 UTC m=+0.158912168 container attach 82ec1722127ae299a52d45151adf2a5dc205db6c28e92bbee53decb4cb67ff3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 16:10:02 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:02 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a50702146cb93b9c0787f85c5ec1692ca8685497eba0979dff255f833729f92c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:02 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a50702146cb93b9c0787f85c5ec1692ca8685497eba0979dff255f833729f92c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:02 np0005544708 podman[94459]: 2025-12-03 21:10:02.168390969 +0000 UTC m=+0.117413859 container init c3e064ab20c10af36be7c7d6cbdf22a6e7a22913a43e641d957ad1f2565e51db (image=quay.io/ceph/ceph:v20, name=stupefied_poincare, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  3 16:10:02 np0005544708 podman[94459]: 2025-12-03 21:10:02.07375961 +0000 UTC m=+0.022782440 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:10:02 np0005544708 podman[94459]: 2025-12-03 21:10:02.177607105 +0000 UTC m=+0.126629945 container start c3e064ab20c10af36be7c7d6cbdf22a6e7a22913a43e641d957ad1f2565e51db (image=quay.io/ceph/ceph:v20, name=stupefied_poincare, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:10:02 np0005544708 podman[94459]: 2025-12-03 21:10:02.182095084 +0000 UTC m=+0.131117974 container attach c3e064ab20c10af36be7c7d6cbdf22a6e7a22913a43e641d957ad1f2565e51db (image=quay.io/ceph/ceph:v20, name=stupefied_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  3 16:10:02 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]: {
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:    "0": [
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:        {
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "devices": [
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "/dev/loop3"
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            ],
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "lv_name": "ceph_lv0",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "lv_size": "21470642176",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "name": "ceph_lv0",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "tags": {
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.cluster_name": "ceph",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.crush_device_class": "",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.encrypted": "0",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.objectstore": "bluestore",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.osd_id": "0",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.type": "block",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.vdo": "0",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.with_tpm": "0"
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            },
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "type": "block",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "vg_name": "ceph_vg0"
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:        }
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:    ],
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:    "1": [
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:        {
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "devices": [
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "/dev/loop4"
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            ],
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "lv_name": "ceph_lv1",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "lv_size": "21470642176",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "name": "ceph_lv1",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "tags": {
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.cluster_name": "ceph",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.crush_device_class": "",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.encrypted": "0",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.objectstore": "bluestore",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.osd_id": "1",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.type": "block",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.vdo": "0",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.with_tpm": "0"
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            },
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "type": "block",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "vg_name": "ceph_vg1"
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:        }
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:    ],
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:    "2": [
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:        {
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "devices": [
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "/dev/loop5"
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            ],
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "lv_name": "ceph_lv2",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "lv_size": "21470642176",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "name": "ceph_lv2",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "tags": {
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.cluster_name": "ceph",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.crush_device_class": "",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.encrypted": "0",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.objectstore": "bluestore",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.osd_id": "2",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.type": "block",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.vdo": "0",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:                "ceph.with_tpm": "0"
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            },
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "type": "block",
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:            "vg_name": "ceph_vg2"
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:        }
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]:    ]
Dec  3 16:10:02 np0005544708 relaxed_volhard[94475]: }
Dec  3 16:10:02 np0005544708 systemd[1]: libpod-82ec1722127ae299a52d45151adf2a5dc205db6c28e92bbee53decb4cb67ff3d.scope: Deactivated successfully.
Dec  3 16:10:02 np0005544708 podman[94509]: 2025-12-03 21:10:02.494494613 +0000 UTC m=+0.026438897 container died 82ec1722127ae299a52d45151adf2a5dc205db6c28e92bbee53decb4cb67ff3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_volhard, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  3 16:10:02 np0005544708 systemd[1]: var-lib-containers-storage-overlay-d7b01352322ed5d62984b3c896d782745067f29827a591abbf145e30e9d4dca3-merged.mount: Deactivated successfully.
Dec  3 16:10:02 np0005544708 podman[94509]: 2025-12-03 21:10:02.592981425 +0000 UTC m=+0.124925739 container remove 82ec1722127ae299a52d45151adf2a5dc205db6c28e92bbee53decb4cb67ff3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_volhard, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  3 16:10:02 np0005544708 systemd[1]: libpod-conmon-82ec1722127ae299a52d45151adf2a5dc205db6c28e92bbee53decb4cb67ff3d.scope: Deactivated successfully.
Dec  3 16:10:02 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  3 16:10:02 np0005544708 stupefied_poincare[94481]: 
Dec  3 16:10:02 np0005544708 stupefied_poincare[94481]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}]
Dec  3 16:10:02 np0005544708 systemd[1]: libpod-c3e064ab20c10af36be7c7d6cbdf22a6e7a22913a43e641d957ad1f2565e51db.scope: Deactivated successfully.
Dec  3 16:10:02 np0005544708 podman[94459]: 2025-12-03 21:10:02.68634768 +0000 UTC m=+0.635370480 container died c3e064ab20c10af36be7c7d6cbdf22a6e7a22913a43e641d957ad1f2565e51db (image=quay.io/ceph/ceph:v20, name=stupefied_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  3 16:10:02 np0005544708 systemd[1]: var-lib-containers-storage-overlay-a50702146cb93b9c0787f85c5ec1692ca8685497eba0979dff255f833729f92c-merged.mount: Deactivated successfully.
Dec  3 16:10:02 np0005544708 podman[94459]: 2025-12-03 21:10:02.73271015 +0000 UTC m=+0.681732950 container remove c3e064ab20c10af36be7c7d6cbdf22a6e7a22913a43e641d957ad1f2565e51db (image=quay.io/ceph/ceph:v20, name=stupefied_poincare, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:10:02 np0005544708 systemd[1]: libpod-conmon-c3e064ab20c10af36be7c7d6cbdf22a6e7a22913a43e641d957ad1f2565e51db.scope: Deactivated successfully.
Dec  3 16:10:03 np0005544708 podman[94597]: 2025-12-03 21:10:03.090671056 +0000 UTC m=+0.035067349 container create 84596d629f7f0172fc3fc006a5c80428cb75ac5729e0f94e705b3fa292da28bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_sanderson, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:10:03 np0005544708 systemd[1]: Started libpod-conmon-84596d629f7f0172fc3fc006a5c80428cb75ac5729e0f94e705b3fa292da28bf.scope.
Dec  3 16:10:03 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Dec  3 16:10:03 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:03 np0005544708 podman[94597]: 2025-12-03 21:10:03.074927125 +0000 UTC m=+0.019323438 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:10:03 np0005544708 podman[94597]: 2025-12-03 21:10:03.176818528 +0000 UTC m=+0.121214831 container init 84596d629f7f0172fc3fc006a5c80428cb75ac5729e0f94e705b3fa292da28bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_sanderson, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 16:10:03 np0005544708 podman[94597]: 2025-12-03 21:10:03.187166384 +0000 UTC m=+0.131562707 container start 84596d629f7f0172fc3fc006a5c80428cb75ac5729e0f94e705b3fa292da28bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_sanderson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:10:03 np0005544708 podman[94597]: 2025-12-03 21:10:03.191596203 +0000 UTC m=+0.135992506 container attach 84596d629f7f0172fc3fc006a5c80428cb75ac5729e0f94e705b3fa292da28bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 16:10:03 np0005544708 nervous_sanderson[94613]: 167 167
Dec  3 16:10:03 np0005544708 systemd[1]: libpod-84596d629f7f0172fc3fc006a5c80428cb75ac5729e0f94e705b3fa292da28bf.scope: Deactivated successfully.
Dec  3 16:10:03 np0005544708 podman[94597]: 2025-12-03 21:10:03.194698046 +0000 UTC m=+0.139094419 container died 84596d629f7f0172fc3fc006a5c80428cb75ac5729e0f94e705b3fa292da28bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec  3 16:10:03 np0005544708 systemd[1]: var-lib-containers-storage-overlay-9aa8d0ee6222cb136c17db159405ea05988525ffd2677588ca996d4a48347ad3-merged.mount: Deactivated successfully.
Dec  3 16:10:03 np0005544708 podman[94597]: 2025-12-03 21:10:03.241082855 +0000 UTC m=+0.185479188 container remove 84596d629f7f0172fc3fc006a5c80428cb75ac5729e0f94e705b3fa292da28bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:10:03 np0005544708 ansible-async_wrapper.py[93693]: Done in kid B.
Dec  3 16:10:03 np0005544708 systemd[1]: libpod-conmon-84596d629f7f0172fc3fc006a5c80428cb75ac5729e0f94e705b3fa292da28bf.scope: Deactivated successfully.
Dec  3 16:10:03 np0005544708 podman[94637]: 2025-12-03 21:10:03.496378888 +0000 UTC m=+0.068723858 container create dfa17126c425ac3d24f7ab11616f11dfdc377876cb8443d7528368392904b2bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_brattain, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 16:10:03 np0005544708 systemd[1]: Started libpod-conmon-dfa17126c425ac3d24f7ab11616f11dfdc377876cb8443d7528368392904b2bb.scope.
Dec  3 16:10:03 np0005544708 podman[94637]: 2025-12-03 21:10:03.467181327 +0000 UTC m=+0.039526387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:10:03 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c8a86f9c9b1ea49d4ffcc6a7a2b5409aae37adc3bdd5cbbfc22897c8d3e070e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c8a86f9c9b1ea49d4ffcc6a7a2b5409aae37adc3bdd5cbbfc22897c8d3e070e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c8a86f9c9b1ea49d4ffcc6a7a2b5409aae37adc3bdd5cbbfc22897c8d3e070e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c8a86f9c9b1ea49d4ffcc6a7a2b5409aae37adc3bdd5cbbfc22897c8d3e070e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:03 np0005544708 podman[94637]: 2025-12-03 21:10:03.60609526 +0000 UTC m=+0.178440310 container init dfa17126c425ac3d24f7ab11616f11dfdc377876cb8443d7528368392904b2bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:10:03 np0005544708 podman[94637]: 2025-12-03 21:10:03.618367587 +0000 UTC m=+0.190712597 container start dfa17126c425ac3d24f7ab11616f11dfdc377876cb8443d7528368392904b2bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_brattain, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 16:10:03 np0005544708 podman[94637]: 2025-12-03 21:10:03.623663569 +0000 UTC m=+0.196008549 container attach dfa17126c425ac3d24f7ab11616f11dfdc377876cb8443d7528368392904b2bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  3 16:10:03 np0005544708 ceph-mds[93586]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec  3 16:10:03 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mds-cephfs-compute-0-gzkqle[93561]: 2025-12-03T21:10:03.729+0000 7f5195f57640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec  3 16:10:03 np0005544708 python3[94682]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:10:03 np0005544708 podman[94685]: 2025-12-03 21:10:03.865560433 +0000 UTC m=+0.078017135 container create 7d526ed195324b4734745766ba30f4301c23debeb295d9e949190f49085887e6 (image=quay.io/ceph/ceph:v20, name=hopeful_shirley, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  3 16:10:03 np0005544708 systemd[1]: Started libpod-conmon-7d526ed195324b4734745766ba30f4301c23debeb295d9e949190f49085887e6.scope.
Dec  3 16:10:03 np0005544708 podman[94685]: 2025-12-03 21:10:03.829834569 +0000 UTC m=+0.042291361 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:10:03 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/208048ec0fab4deb0ae96b8055b23eeccd37d9f0a2cd07f1177ddd01f5650586/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/208048ec0fab4deb0ae96b8055b23eeccd37d9f0a2cd07f1177ddd01f5650586/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:03 np0005544708 podman[94685]: 2025-12-03 21:10:03.955288531 +0000 UTC m=+0.167745253 container init 7d526ed195324b4734745766ba30f4301c23debeb295d9e949190f49085887e6 (image=quay.io/ceph/ceph:v20, name=hopeful_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:10:03 np0005544708 podman[94685]: 2025-12-03 21:10:03.960553252 +0000 UTC m=+0.173009954 container start 7d526ed195324b4734745766ba30f4301c23debeb295d9e949190f49085887e6 (image=quay.io/ceph/ceph:v20, name=hopeful_shirley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Dec  3 16:10:03 np0005544708 podman[94685]: 2025-12-03 21:10:03.964297322 +0000 UTC m=+0.176754074 container attach 7d526ed195324b4734745766ba30f4301c23debeb295d9e949190f49085887e6 (image=quay.io/ceph/ceph:v20, name=hopeful_shirley, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  3 16:10:04 np0005544708 lvm[94794]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:10:04 np0005544708 lvm[94797]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:10:04 np0005544708 lvm[94797]: VG ceph_vg1 finished
Dec  3 16:10:04 np0005544708 lvm[94794]: VG ceph_vg0 finished
Dec  3 16:10:04 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  3 16:10:04 np0005544708 hopeful_shirley[94710]: 
Dec  3 16:10:04 np0005544708 hopeful_shirley[94710]: [{"container_id": "4b1e1515111c", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.25%", "created": "2025-12-03T21:08:44.736340Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-12-03T21:08:44.799038Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T21:09:59.658310Z", "memory_usage": 7803502, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2025-12-03T21:08:44.620473Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a@crash.compute-0", "version": "20.2.0"}, {"container_id": "696a375e6a5a", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "9.87%", "created": "2025-12-03T21:09:57.986899Z", "daemon_id": "cephfs.compute-0.gzkqle", "daemon_name": "mds.cephfs.compute-0.gzkqle", "daemon_type": "mds", "events": ["2025-12-03T21:09:58.076640Z daemon:mds.cephfs.compute-0.gzkqle [INFO] \"Deployed mds.cephfs.compute-0.gzkqle on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": 
"2025-12-03T21:09:59.658609Z", "memory_usage": 17846763, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2025-12-03T21:09:57.852688Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a@mds.cephfs.compute-0.gzkqle", "version": "20.2.0"}, {"container_id": "3ad5fa1a42ad", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "17.74%", "created": "2025-12-03T21:08:03.769341Z", "daemon_id": "compute-0.jxauqt", "daemon_name": "mgr.compute-0.jxauqt", "daemon_type": "mgr", "events": ["2025-12-03T21:08:49.588947Z daemon:mgr.compute-0.jxauqt [INFO] \"Reconfigured mgr.compute-0.jxauqt on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T21:09:59.658242Z", "memory_usage": 545574092, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-03T21:08:03.636215Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a@mgr.compute-0.jxauqt", "version": "20.2.0"}, {"container_id": "5be1cf87f445", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.81%", "created": "2025-12-03T21:07:59.265229Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-12-03T21:08:48.987227Z daemon:mon.compute-0 [INFO] 
\"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T21:09:59.658153Z", "memory_request": 2147483648, "memory_usage": 38503710, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2025-12-03T21:08:01.518643Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a@mon.compute-0", "version": "20.2.0"}, {"container_id": "fbaf3a19f164", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.99%", "created": "2025-12-03T21:09:09.359923Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-12-03T21:09:09.428911Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T21:09:59.658380Z", "memory_request": 4294967296, "memory_usage": 60345548, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-03T21:09:09.272542Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a@osd.0", "version": "20.2.0"}, {"container_id": "947e483d8391", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"cpu_percentage": "2.10%", "created": "2025-12-03T21:09:14.077017Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-12-03T21:09:14.157357Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T21:09:59.658448Z", "memory_request": 4294967296, "memory_usage": 58395197, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-03T21:09:13.776336Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a@osd.1", "version": "20.2.0"}, {"container_id": "f54ced40cf6e", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "2.16%", "created": "2025-12-03T21:09:18.678110Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-12-03T21:09:18.819614Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-03T21:09:59.658518Z", "memory_request": 4294967296, "memory_usage": 56623104, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-03T21:09:18.498669Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a@osd.2", "version": "20.2.0"}]
Dec  3 16:10:04 np0005544708 lvm[94800]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:10:04 np0005544708 lvm[94800]: VG ceph_vg2 finished
Dec  3 16:10:04 np0005544708 systemd[1]: libpod-7d526ed195324b4734745766ba30f4301c23debeb295d9e949190f49085887e6.scope: Deactivated successfully.
Dec  3 16:10:04 np0005544708 podman[94685]: 2025-12-03 21:10:04.381867691 +0000 UTC m=+0.594324393 container died 7d526ed195324b4734745766ba30f4301c23debeb295d9e949190f49085887e6 (image=quay.io/ceph/ceph:v20, name=hopeful_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:10:04 np0005544708 lvm[94803]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:10:04 np0005544708 lvm[94803]: VG ceph_vg0 finished
Dec  3 16:10:04 np0005544708 systemd[1]: var-lib-containers-storage-overlay-208048ec0fab4deb0ae96b8055b23eeccd37d9f0a2cd07f1177ddd01f5650586-merged.mount: Deactivated successfully.
Dec  3 16:10:04 np0005544708 podman[94685]: 2025-12-03 21:10:04.423811282 +0000 UTC m=+0.636267984 container remove 7d526ed195324b4734745766ba30f4301c23debeb295d9e949190f49085887e6 (image=quay.io/ceph/ceph:v20, name=hopeful_shirley, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec  3 16:10:04 np0005544708 systemd[1]: libpod-conmon-7d526ed195324b4734745766ba30f4301c23debeb295d9e949190f49085887e6.scope: Deactivated successfully.
Dec  3 16:10:04 np0005544708 blissful_brattain[94677]: {}
Dec  3 16:10:04 np0005544708 systemd[1]: libpod-dfa17126c425ac3d24f7ab11616f11dfdc377876cb8443d7528368392904b2bb.scope: Deactivated successfully.
Dec  3 16:10:04 np0005544708 systemd[1]: libpod-dfa17126c425ac3d24f7ab11616f11dfdc377876cb8443d7528368392904b2bb.scope: Consumed 1.365s CPU time.
Dec  3 16:10:04 np0005544708 podman[94637]: 2025-12-03 21:10:04.494492341 +0000 UTC m=+1.066837321 container died dfa17126c425ac3d24f7ab11616f11dfdc377876cb8443d7528368392904b2bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_brattain, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:10:04 np0005544708 systemd[1]: var-lib-containers-storage-overlay-4c8a86f9c9b1ea49d4ffcc6a7a2b5409aae37adc3bdd5cbbfc22897c8d3e070e-merged.mount: Deactivated successfully.
Dec  3 16:10:04 np0005544708 podman[94637]: 2025-12-03 21:10:04.535697122 +0000 UTC m=+1.108042102 container remove dfa17126c425ac3d24f7ab11616f11dfdc377876cb8443d7528368392904b2bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_brattain, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:10:04 np0005544708 systemd[1]: libpod-conmon-dfa17126c425ac3d24f7ab11616f11dfdc377876cb8443d7528368392904b2bb.scope: Deactivated successfully.
Dec  3 16:10:04 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:10:04 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:04 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:10:04 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:05 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Dec  3 16:10:05 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:05 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:05 np0005544708 podman[94948]: 2025-12-03 21:10:05.238544885 +0000 UTC m=+0.063712664 container exec 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:10:05 np0005544708 podman[94948]: 2025-12-03 21:10:05.345963855 +0000 UTC m=+0.171131604 container exec_died 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:10:05 np0005544708 python3[94993]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:10:05 np0005544708 podman[95031]: 2025-12-03 21:10:05.539818766 +0000 UTC m=+0.042999510 container create 892914653d0ecde6f4e7ea3c291075f718b8686ab7104278f691359f55d8f5ec (image=quay.io/ceph/ceph:v20, name=busy_lovelace, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:10:05 np0005544708 systemd[1]: Started libpod-conmon-892914653d0ecde6f4e7ea3c291075f718b8686ab7104278f691359f55d8f5ec.scope.
Dec  3 16:10:05 np0005544708 podman[95031]: 2025-12-03 21:10:05.521039764 +0000 UTC m=+0.024220578 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:10:05 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:05 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96f6e0234955d63cb0ee3111a47765b3e75c0e72414f6af508ce57629364b2ea/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:05 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96f6e0234955d63cb0ee3111a47765b3e75c0e72414f6af508ce57629364b2ea/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:05 np0005544708 podman[95031]: 2025-12-03 21:10:05.642951252 +0000 UTC m=+0.146132086 container init 892914653d0ecde6f4e7ea3c291075f718b8686ab7104278f691359f55d8f5ec (image=quay.io/ceph/ceph:v20, name=busy_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:10:05 np0005544708 podman[95031]: 2025-12-03 21:10:05.64962896 +0000 UTC m=+0.152809714 container start 892914653d0ecde6f4e7ea3c291075f718b8686ab7104278f691359f55d8f5ec (image=quay.io/ceph/ceph:v20, name=busy_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:10:05 np0005544708 podman[95031]: 2025-12-03 21:10:05.65335977 +0000 UTC m=+0.156540604 container attach 892914653d0ecde6f4e7ea3c291075f718b8686ab7104278f691359f55d8f5ec (image=quay.io/ceph/ceph:v20, name=busy_lovelace, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/902910606' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec  3 16:10:06 np0005544708 busy_lovelace[95064]: 
Dec  3 16:10:06 np0005544708 busy_lovelace[95064]: {"fsid":"c21de27e-a7fd-594b-8324-0697ba9aab3a","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":124,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":3,"osd_up_since":1764796165,"num_in_osds":3,"osd_in_since":1764796141,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":24,"data_bytes":461710,"bytes_used":83910656,"bytes_avail":64328015872,"bytes_total":64411926528,"write_bytes_sec":1194,"read_op_per_sec":0,"write_op_per_sec":3},"fsmap":{"epoch":5,"btime":"2025-12-03T21:09:59:713283+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.gzkqle","status":"up:active","gid":14242}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-03T21:09:23.137474+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Dec  3 16:10:06 np0005544708 systemd[1]: libpod-892914653d0ecde6f4e7ea3c291075f718b8686ab7104278f691359f55d8f5ec.scope: Deactivated successfully.
Dec  3 16:10:06 np0005544708 podman[95031]: 2025-12-03 21:10:06.213387787 +0000 UTC m=+0.716568531 container died 892914653d0ecde6f4e7ea3c291075f718b8686ab7104278f691359f55d8f5ec (image=quay.io/ceph/ceph:v20, name=busy_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:10:06 np0005544708 systemd[1]: var-lib-containers-storage-overlay-96f6e0234955d63cb0ee3111a47765b3e75c0e72414f6af508ce57629364b2ea-merged.mount: Deactivated successfully.
Dec  3 16:10:06 np0005544708 podman[95031]: 2025-12-03 21:10:06.263489835 +0000 UTC m=+0.766670579 container remove 892914653d0ecde6f4e7ea3c291075f718b8686ab7104278f691359f55d8f5ec (image=quay.io/ceph/ceph:v20, name=busy_lovelace, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:10:06 np0005544708 systemd[1]: libpod-conmon-892914653d0ecde6f4e7ea3c291075f718b8686ab7104278f691359f55d8f5ec.scope: Deactivated successfully.
Dec  3 16:10:06 np0005544708 podman[95251]: 2025-12-03 21:10:06.648459713 +0000 UTC m=+0.048822096 container create 5a7c4c63c01735a747b71a8fe7c2c8c78649e227869680ddc35e7bf843452352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec  3 16:10:06 np0005544708 systemd[1]: Started libpod-conmon-5a7c4c63c01735a747b71a8fe7c2c8c78649e227869680ddc35e7bf843452352.scope.
Dec  3 16:10:06 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:06 np0005544708 podman[95251]: 2025-12-03 21:10:06.626407604 +0000 UTC m=+0.026769997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:10:06 np0005544708 podman[95251]: 2025-12-03 21:10:06.737101451 +0000 UTC m=+0.137464054 container init 5a7c4c63c01735a747b71a8fe7c2c8c78649e227869680ddc35e7bf843452352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:10:06 np0005544708 podman[95251]: 2025-12-03 21:10:06.744119519 +0000 UTC m=+0.144481862 container start 5a7c4c63c01735a747b71a8fe7c2c8c78649e227869680ddc35e7bf843452352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec  3 16:10:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:10:06 np0005544708 podman[95251]: 2025-12-03 21:10:06.747816008 +0000 UTC m=+0.148178431 container attach 5a7c4c63c01735a747b71a8fe7c2c8c78649e227869680ddc35e7bf843452352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec  3 16:10:06 np0005544708 affectionate_hypatia[95267]: 167 167
Dec  3 16:10:06 np0005544708 systemd[1]: libpod-5a7c4c63c01735a747b71a8fe7c2c8c78649e227869680ddc35e7bf843452352.scope: Deactivated successfully.
Dec  3 16:10:06 np0005544708 podman[95251]: 2025-12-03 21:10:06.750299824 +0000 UTC m=+0.150662197 container died 5a7c4c63c01735a747b71a8fe7c2c8c78649e227869680ddc35e7bf843452352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec  3 16:10:06 np0005544708 systemd[1]: var-lib-containers-storage-overlay-3eff01d82869ee7924cf38b66159feb6dcfc2bdbb8f92e22e5b08456114af9d3-merged.mount: Deactivated successfully.
Dec  3 16:10:06 np0005544708 podman[95251]: 2025-12-03 21:10:06.797771183 +0000 UTC m=+0.198133526 container remove 5a7c4c63c01735a747b71a8fe7c2c8c78649e227869680ddc35e7bf843452352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hypatia, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:10:06 np0005544708 systemd[1]: libpod-conmon-5a7c4c63c01735a747b71a8fe7c2c8c78649e227869680ddc35e7bf843452352.scope: Deactivated successfully.
Dec  3 16:10:06 np0005544708 podman[95290]: 2025-12-03 21:10:06.962754102 +0000 UTC m=+0.053563242 container create 81596905cca1fd04a2bbc2e1a7fa7fc3ce8a039c448ba1f6c809543e59841d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_khorana, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  3 16:10:07 np0005544708 systemd[1]: Started libpod-conmon-81596905cca1fd04a2bbc2e1a7fa7fc3ce8a039c448ba1f6c809543e59841d3e.scope.
Dec  3 16:10:07 np0005544708 podman[95290]: 2025-12-03 21:10:06.932142094 +0000 UTC m=+0.022951324 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:10:07 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67bd12f2b4aae8216576c72e8ca5cfad23598a895e9414611775e30555ec15e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67bd12f2b4aae8216576c72e8ca5cfad23598a895e9414611775e30555ec15e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67bd12f2b4aae8216576c72e8ca5cfad23598a895e9414611775e30555ec15e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67bd12f2b4aae8216576c72e8ca5cfad23598a895e9414611775e30555ec15e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67bd12f2b4aae8216576c72e8ca5cfad23598a895e9414611775e30555ec15e1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:07 np0005544708 podman[95290]: 2025-12-03 21:10:07.06035696 +0000 UTC m=+0.151166140 container init 81596905cca1fd04a2bbc2e1a7fa7fc3ce8a039c448ba1f6c809543e59841d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:10:07 np0005544708 podman[95290]: 2025-12-03 21:10:07.078728281 +0000 UTC m=+0.169537431 container start 81596905cca1fd04a2bbc2e1a7fa7fc3ce8a039c448ba1f6c809543e59841d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_khorana, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:10:07 np0005544708 podman[95290]: 2025-12-03 21:10:07.083426357 +0000 UTC m=+0.174235527 container attach 81596905cca1fd04a2bbc2e1a7fa7fc3ce8a039c448ba1f6c809543e59841d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_khorana, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:10:07 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Dec  3 16:10:07 np0005544708 python3[95337]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:10:07 np0005544708 podman[95338]: 2025-12-03 21:10:07.354245254 +0000 UTC m=+0.051942889 container create 7fa5aefafafd689fd809499a678e308dd1b2f0a363d48c00a873141d480c0251 (image=quay.io/ceph/ceph:v20, name=lucid_chaplygin, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec  3 16:10:07 np0005544708 systemd[1]: Started libpod-conmon-7fa5aefafafd689fd809499a678e308dd1b2f0a363d48c00a873141d480c0251.scope.
Dec  3 16:10:07 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706499fe9869319bbf32c0a510d2a20af3962384e4201e8bb50e3e6e1061a4c8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706499fe9869319bbf32c0a510d2a20af3962384e4201e8bb50e3e6e1061a4c8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:07 np0005544708 podman[95338]: 2025-12-03 21:10:07.418717227 +0000 UTC m=+0.116414902 container init 7fa5aefafafd689fd809499a678e308dd1b2f0a363d48c00a873141d480c0251 (image=quay.io/ceph/ceph:v20, name=lucid_chaplygin, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:10:07 np0005544708 podman[95338]: 2025-12-03 21:10:07.424989305 +0000 UTC m=+0.122686990 container start 7fa5aefafafd689fd809499a678e308dd1b2f0a363d48c00a873141d480c0251 (image=quay.io/ceph/ceph:v20, name=lucid_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 16:10:07 np0005544708 podman[95338]: 2025-12-03 21:10:07.335640587 +0000 UTC m=+0.033338232 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:10:07 np0005544708 podman[95338]: 2025-12-03 21:10:07.428921799 +0000 UTC m=+0.126619494 container attach 7fa5aefafafd689fd809499a678e308dd1b2f0a363d48c00a873141d480c0251 (image=quay.io/ceph/ceph:v20, name=lucid_chaplygin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:10:07 np0005544708 frosty_khorana[95307]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:10:07 np0005544708 frosty_khorana[95307]: --> All data devices are unavailable
Dec  3 16:10:07 np0005544708 systemd[1]: libpod-81596905cca1fd04a2bbc2e1a7fa7fc3ce8a039c448ba1f6c809543e59841d3e.scope: Deactivated successfully.
Dec  3 16:10:07 np0005544708 podman[95290]: 2025-12-03 21:10:07.613099161 +0000 UTC m=+0.703908291 container died 81596905cca1fd04a2bbc2e1a7fa7fc3ce8a039c448ba1f6c809543e59841d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_khorana, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:10:07 np0005544708 systemd[1]: var-lib-containers-storage-overlay-67bd12f2b4aae8216576c72e8ca5cfad23598a895e9414611775e30555ec15e1-merged.mount: Deactivated successfully.
Dec  3 16:10:07 np0005544708 podman[95290]: 2025-12-03 21:10:07.655443673 +0000 UTC m=+0.746252803 container remove 81596905cca1fd04a2bbc2e1a7fa7fc3ce8a039c448ba1f6c809543e59841d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_khorana, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 16:10:07 np0005544708 systemd[1]: libpod-conmon-81596905cca1fd04a2bbc2e1a7fa7fc3ce8a039c448ba1f6c809543e59841d3e.scope: Deactivated successfully.
Dec  3 16:10:07 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  3 16:10:07 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2874896013' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec  3 16:10:07 np0005544708 lucid_chaplygin[95357]: 
Dec  3 16:10:07 np0005544708 lucid_chaplygin[95357]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""}]
Dec  3 16:10:07 np0005544708 systemd[1]: libpod-7fa5aefafafd689fd809499a678e308dd1b2f0a363d48c00a873141d480c0251.scope: Deactivated successfully.
Dec  3 16:10:07 np0005544708 podman[95338]: 2025-12-03 21:10:07.812220523 +0000 UTC m=+0.509918188 container died 7fa5aefafafd689fd809499a678e308dd1b2f0a363d48c00a873141d480c0251 (image=quay.io/ceph/ceph:v20, name=lucid_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:10:07 np0005544708 systemd[1]: var-lib-containers-storage-overlay-706499fe9869319bbf32c0a510d2a20af3962384e4201e8bb50e3e6e1061a4c8-merged.mount: Deactivated successfully.
Dec  3 16:10:07 np0005544708 podman[95338]: 2025-12-03 21:10:07.855338586 +0000 UTC m=+0.553036241 container remove 7fa5aefafafd689fd809499a678e308dd1b2f0a363d48c00a873141d480c0251 (image=quay.io/ceph/ceph:v20, name=lucid_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:10:07 np0005544708 systemd[1]: libpod-conmon-7fa5aefafafd689fd809499a678e308dd1b2f0a363d48c00a873141d480c0251.scope: Deactivated successfully.
Dec  3 16:10:08 np0005544708 podman[95480]: 2025-12-03 21:10:08.094184608 +0000 UTC m=+0.045954660 container create 27dc558dee46a242ca80d16aced2a3d9a3dfffcd73d5cd2407120cd64208e423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mendel, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:10:08 np0005544708 systemd[1]: Started libpod-conmon-27dc558dee46a242ca80d16aced2a3d9a3dfffcd73d5cd2407120cd64208e423.scope.
Dec  3 16:10:08 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:08 np0005544708 podman[95480]: 2025-12-03 21:10:08.074737118 +0000 UTC m=+0.026507200 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:10:08 np0005544708 podman[95480]: 2025-12-03 21:10:08.182169599 +0000 UTC m=+0.133939681 container init 27dc558dee46a242ca80d16aced2a3d9a3dfffcd73d5cd2407120cd64208e423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mendel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec  3 16:10:08 np0005544708 podman[95480]: 2025-12-03 21:10:08.193405719 +0000 UTC m=+0.145175771 container start 27dc558dee46a242ca80d16aced2a3d9a3dfffcd73d5cd2407120cd64208e423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mendel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 16:10:08 np0005544708 podman[95480]: 2025-12-03 21:10:08.19714017 +0000 UTC m=+0.148910252 container attach 27dc558dee46a242ca80d16aced2a3d9a3dfffcd73d5cd2407120cd64208e423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mendel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 16:10:08 np0005544708 laughing_mendel[95496]: 167 167
Dec  3 16:10:08 np0005544708 systemd[1]: libpod-27dc558dee46a242ca80d16aced2a3d9a3dfffcd73d5cd2407120cd64208e423.scope: Deactivated successfully.
Dec  3 16:10:08 np0005544708 podman[95480]: 2025-12-03 21:10:08.199859192 +0000 UTC m=+0.151629274 container died 27dc558dee46a242ca80d16aced2a3d9a3dfffcd73d5cd2407120cd64208e423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mendel, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:10:08 np0005544708 systemd[1]: var-lib-containers-storage-overlay-8fdb3724f8c0b27f64a6475d1b2666ec8265e833f841d5fec5e05ebf84928608-merged.mount: Deactivated successfully.
Dec  3 16:10:08 np0005544708 podman[95480]: 2025-12-03 21:10:08.252858559 +0000 UTC m=+0.204628621 container remove 27dc558dee46a242ca80d16aced2a3d9a3dfffcd73d5cd2407120cd64208e423 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:10:08 np0005544708 systemd[1]: libpod-conmon-27dc558dee46a242ca80d16aced2a3d9a3dfffcd73d5cd2407120cd64208e423.scope: Deactivated successfully.
Dec  3 16:10:08 np0005544708 podman[95521]: 2025-12-03 21:10:08.467292499 +0000 UTC m=+0.059347737 container create ade57dacec19039c2e468ae140c208dd15ed56bbab80f74f7b8bcbfff43bcc63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_matsumoto, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:10:08 np0005544708 systemd[1]: Started libpod-conmon-ade57dacec19039c2e468ae140c208dd15ed56bbab80f74f7b8bcbfff43bcc63.scope.
Dec  3 16:10:08 np0005544708 podman[95521]: 2025-12-03 21:10:08.441652474 +0000 UTC m=+0.033707742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:10:08 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:08 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc9e8a2d31ffe6e30f49b67f11570888b31b2f80a1171b34f34307587bb2a55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:08 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc9e8a2d31ffe6e30f49b67f11570888b31b2f80a1171b34f34307587bb2a55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:08 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc9e8a2d31ffe6e30f49b67f11570888b31b2f80a1171b34f34307587bb2a55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:08 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc9e8a2d31ffe6e30f49b67f11570888b31b2f80a1171b34f34307587bb2a55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:08 np0005544708 podman[95521]: 2025-12-03 21:10:08.581367638 +0000 UTC m=+0.173422946 container init ade57dacec19039c2e468ae140c208dd15ed56bbab80f74f7b8bcbfff43bcc63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  3 16:10:08 np0005544708 podman[95521]: 2025-12-03 21:10:08.597979912 +0000 UTC m=+0.190035180 container start ade57dacec19039c2e468ae140c208dd15ed56bbab80f74f7b8bcbfff43bcc63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_matsumoto, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:10:08 np0005544708 podman[95521]: 2025-12-03 21:10:08.601919556 +0000 UTC m=+0.193974884 container attach ade57dacec19039c2e468ae140c208dd15ed56bbab80f74f7b8bcbfff43bcc63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_matsumoto, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]: {
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:    "0": [
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:        {
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "devices": [
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "/dev/loop3"
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            ],
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "lv_name": "ceph_lv0",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "lv_size": "21470642176",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "name": "ceph_lv0",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "tags": {
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.cluster_name": "ceph",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.crush_device_class": "",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.encrypted": "0",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.objectstore": "bluestore",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.osd_id": "0",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.type": "block",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.vdo": "0",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.with_tpm": "0"
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            },
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "type": "block",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "vg_name": "ceph_vg0"
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:        }
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:    ],
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:    "1": [
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:        {
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "devices": [
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "/dev/loop4"
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            ],
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "lv_name": "ceph_lv1",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "lv_size": "21470642176",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "name": "ceph_lv1",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "tags": {
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.cluster_name": "ceph",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.crush_device_class": "",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.encrypted": "0",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.objectstore": "bluestore",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.osd_id": "1",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.type": "block",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.vdo": "0",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.with_tpm": "0"
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            },
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "type": "block",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "vg_name": "ceph_vg1"
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:        }
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:    ],
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:    "2": [
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:        {
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "devices": [
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "/dev/loop5"
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            ],
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "lv_name": "ceph_lv2",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "lv_size": "21470642176",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "name": "ceph_lv2",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "tags": {
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.cluster_name": "ceph",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.crush_device_class": "",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.encrypted": "0",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.objectstore": "bluestore",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.osd_id": "2",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.type": "block",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.vdo": "0",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:                "ceph.with_tpm": "0"
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            },
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "type": "block",
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:            "vg_name": "ceph_vg2"
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:        }
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]:    ]
Dec  3 16:10:08 np0005544708 quizzical_matsumoto[95537]: }
Dec  3 16:10:08 np0005544708 systemd[1]: libpod-ade57dacec19039c2e468ae140c208dd15ed56bbab80f74f7b8bcbfff43bcc63.scope: Deactivated successfully.
Dec  3 16:10:08 np0005544708 podman[95521]: 2025-12-03 21:10:08.944735878 +0000 UTC m=+0.536791176 container died ade57dacec19039c2e468ae140c208dd15ed56bbab80f74f7b8bcbfff43bcc63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:10:08 np0005544708 python3[95569]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:10:08 np0005544708 systemd[1]: var-lib-containers-storage-overlay-7cc9e8a2d31ffe6e30f49b67f11570888b31b2f80a1171b34f34307587bb2a55-merged.mount: Deactivated successfully.
Dec  3 16:10:09 np0005544708 podman[95521]: 2025-12-03 21:10:09.008846851 +0000 UTC m=+0.600902079 container remove ade57dacec19039c2e468ae140c208dd15ed56bbab80f74f7b8bcbfff43bcc63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 16:10:09 np0005544708 systemd[1]: libpod-conmon-ade57dacec19039c2e468ae140c208dd15ed56bbab80f74f7b8bcbfff43bcc63.scope: Deactivated successfully.
Dec  3 16:10:09 np0005544708 podman[95579]: 2025-12-03 21:10:09.047476743 +0000 UTC m=+0.051313812 container create 44689a57d39df38cec03bdec4d8fa8ba5618c933bf1a95f3cd006a080590a222 (image=quay.io/ceph/ceph:v20, name=admiring_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:10:09 np0005544708 systemd[1]: Started libpod-conmon-44689a57d39df38cec03bdec4d8fa8ba5618c933bf1a95f3cd006a080590a222.scope.
Dec  3 16:10:09 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:09 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ef03a25828328613f77757efda3eff99697b4e89e1e65b78aa6c6fb0c253163/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:09 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ef03a25828328613f77757efda3eff99697b4e89e1e65b78aa6c6fb0c253163/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:09 np0005544708 podman[95579]: 2025-12-03 21:10:09.118241904 +0000 UTC m=+0.122079003 container init 44689a57d39df38cec03bdec4d8fa8ba5618c933bf1a95f3cd006a080590a222 (image=quay.io/ceph/ceph:v20, name=admiring_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:10:09 np0005544708 podman[95579]: 2025-12-03 21:10:09.02488595 +0000 UTC m=+0.028723099 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:10:09 np0005544708 podman[95579]: 2025-12-03 21:10:09.125341534 +0000 UTC m=+0.129178603 container start 44689a57d39df38cec03bdec4d8fa8ba5618c933bf1a95f3cd006a080590a222 (image=quay.io/ceph/ceph:v20, name=admiring_wu, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  3 16:10:09 np0005544708 podman[95579]: 2025-12-03 21:10:09.128811887 +0000 UTC m=+0.132649036 container attach 44689a57d39df38cec03bdec4d8fa8ba5618c933bf1a95f3cd006a080590a222 (image=quay.io/ceph/ceph:v20, name=admiring_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:10:09 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Dec  3 16:10:09 np0005544708 podman[95687]: 2025-12-03 21:10:09.427870529 +0000 UTC m=+0.036639880 container create 835a9b849050b7a531406c1bfd9d1b519051960c8588ee0ff1f5a6a84b71f509 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cartwright, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 16:10:09 np0005544708 systemd[1]: Started libpod-conmon-835a9b849050b7a531406c1bfd9d1b519051960c8588ee0ff1f5a6a84b71f509.scope.
Dec  3 16:10:09 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:09 np0005544708 podman[95687]: 2025-12-03 21:10:09.499296787 +0000 UTC m=+0.108066218 container init 835a9b849050b7a531406c1bfd9d1b519051960c8588ee0ff1f5a6a84b71f509 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cartwright, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:10:09 np0005544708 podman[95687]: 2025-12-03 21:10:09.505829853 +0000 UTC m=+0.114599224 container start 835a9b849050b7a531406c1bfd9d1b519051960c8588ee0ff1f5a6a84b71f509 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cartwright, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec  3 16:10:09 np0005544708 podman[95687]: 2025-12-03 21:10:09.411630885 +0000 UTC m=+0.020400266 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:10:09 np0005544708 funny_cartwright[95704]: 167 167
Dec  3 16:10:09 np0005544708 systemd[1]: libpod-835a9b849050b7a531406c1bfd9d1b519051960c8588ee0ff1f5a6a84b71f509.scope: Deactivated successfully.
Dec  3 16:10:09 np0005544708 podman[95687]: 2025-12-03 21:10:09.511380421 +0000 UTC m=+0.120149772 container attach 835a9b849050b7a531406c1bfd9d1b519051960c8588ee0ff1f5a6a84b71f509 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:10:09 np0005544708 podman[95687]: 2025-12-03 21:10:09.51172172 +0000 UTC m=+0.120491081 container died 835a9b849050b7a531406c1bfd9d1b519051960c8588ee0ff1f5a6a84b71f509 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cartwright, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:10:09 np0005544708 systemd[1]: var-lib-containers-storage-overlay-6eba33053ae3666065742016c25cacf9b6ba836a6b9ee652510fd282dc06c835-merged.mount: Deactivated successfully.
Dec  3 16:10:09 np0005544708 podman[95687]: 2025-12-03 21:10:09.551678697 +0000 UTC m=+0.160448068 container remove 835a9b849050b7a531406c1bfd9d1b519051960c8588ee0ff1f5a6a84b71f509 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_cartwright, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec  3 16:10:09 np0005544708 systemd[1]: libpod-conmon-835a9b849050b7a531406c1bfd9d1b519051960c8588ee0ff1f5a6a84b71f509.scope: Deactivated successfully.
Dec  3 16:10:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Dec  3 16:10:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1387891732' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Dec  3 16:10:09 np0005544708 admiring_wu[95625]: mimic
Dec  3 16:10:09 np0005544708 systemd[1]: libpod-44689a57d39df38cec03bdec4d8fa8ba5618c933bf1a95f3cd006a080590a222.scope: Deactivated successfully.
Dec  3 16:10:09 np0005544708 podman[95579]: 2025-12-03 21:10:09.609824512 +0000 UTC m=+0.613661591 container died 44689a57d39df38cec03bdec4d8fa8ba5618c933bf1a95f3cd006a080590a222 (image=quay.io/ceph/ceph:v20, name=admiring_wu, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  3 16:10:09 np0005544708 systemd[1]: var-lib-containers-storage-overlay-0ef03a25828328613f77757efda3eff99697b4e89e1e65b78aa6c6fb0c253163-merged.mount: Deactivated successfully.
Dec  3 16:10:09 np0005544708 podman[95579]: 2025-12-03 21:10:09.658780629 +0000 UTC m=+0.662617688 container remove 44689a57d39df38cec03bdec4d8fa8ba5618c933bf1a95f3cd006a080590a222 (image=quay.io/ceph/ceph:v20, name=admiring_wu, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 16:10:09 np0005544708 systemd[1]: libpod-conmon-44689a57d39df38cec03bdec4d8fa8ba5618c933bf1a95f3cd006a080590a222.scope: Deactivated successfully.
Dec  3 16:10:09 np0005544708 podman[95743]: 2025-12-03 21:10:09.742206149 +0000 UTC m=+0.045888537 container create 7b01378c563676f5651f29c0979aa1f365c62a3fb550d22bc1fbd5d9e53884e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_grothendieck, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:10:09 np0005544708 systemd[1]: Started libpod-conmon-7b01378c563676f5651f29c0979aa1f365c62a3fb550d22bc1fbd5d9e53884e8.scope.
Dec  3 16:10:09 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:09 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13772b37e426b7351eaa497578fb44c79c9f10926652252025c4b257ff648830/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:09 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13772b37e426b7351eaa497578fb44c79c9f10926652252025c4b257ff648830/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:09 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13772b37e426b7351eaa497578fb44c79c9f10926652252025c4b257ff648830/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:09 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13772b37e426b7351eaa497578fb44c79c9f10926652252025c4b257ff648830/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:09 np0005544708 podman[95743]: 2025-12-03 21:10:09.724064695 +0000 UTC m=+0.027747133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:10:09 np0005544708 podman[95743]: 2025-12-03 21:10:09.831231728 +0000 UTC m=+0.134914136 container init 7b01378c563676f5651f29c0979aa1f365c62a3fb550d22bc1fbd5d9e53884e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 16:10:09 np0005544708 podman[95743]: 2025-12-03 21:10:09.841225006 +0000 UTC m=+0.144907434 container start 7b01378c563676f5651f29c0979aa1f365c62a3fb550d22bc1fbd5d9e53884e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_grothendieck, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec  3 16:10:09 np0005544708 podman[95743]: 2025-12-03 21:10:09.845947921 +0000 UTC m=+0.149630329 container attach 7b01378c563676f5651f29c0979aa1f365c62a3fb550d22bc1fbd5d9e53884e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:10:10 np0005544708 lvm[95862]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:10:10 np0005544708 lvm[95862]: VG ceph_vg1 finished
Dec  3 16:10:10 np0005544708 lvm[95861]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:10:10 np0005544708 lvm[95861]: VG ceph_vg0 finished
Dec  3 16:10:10 np0005544708 lvm[95866]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:10:10 np0005544708 lvm[95866]: VG ceph_vg2 finished
Dec  3 16:10:10 np0005544708 mystifying_grothendieck[95759]: {}
Dec  3 16:10:10 np0005544708 python3[95865]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid c21de27e-a7fd-594b-8324-0697ba9aab3a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:10:10 np0005544708 systemd[1]: libpod-7b01378c563676f5651f29c0979aa1f365c62a3fb550d22bc1fbd5d9e53884e8.scope: Deactivated successfully.
Dec  3 16:10:10 np0005544708 systemd[1]: libpod-7b01378c563676f5651f29c0979aa1f365c62a3fb550d22bc1fbd5d9e53884e8.scope: Consumed 1.353s CPU time.
Dec  3 16:10:10 np0005544708 podman[95869]: 2025-12-03 21:10:10.771738979 +0000 UTC m=+0.051504408 container create e878c72c10358f82b7658245fdc47eb21e5b579574efac2e3b1bea7a27e5b1b7 (image=quay.io/ceph/ceph:v20, name=flamboyant_fermat, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:10:10 np0005544708 podman[95875]: 2025-12-03 21:10:10.784417247 +0000 UTC m=+0.045354322 container died 7b01378c563676f5651f29c0979aa1f365c62a3fb550d22bc1fbd5d9e53884e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_grothendieck, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:10:10 np0005544708 systemd[1]: Started libpod-conmon-e878c72c10358f82b7658245fdc47eb21e5b579574efac2e3b1bea7a27e5b1b7.scope.
Dec  3 16:10:10 np0005544708 systemd[1]: var-lib-containers-storage-overlay-13772b37e426b7351eaa497578fb44c79c9f10926652252025c4b257ff648830-merged.mount: Deactivated successfully.
Dec  3 16:10:10 np0005544708 podman[95875]: 2025-12-03 21:10:10.834606598 +0000 UTC m=+0.095543643 container remove 7b01378c563676f5651f29c0979aa1f365c62a3fb550d22bc1fbd5d9e53884e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:10:10 np0005544708 podman[95869]: 2025-12-03 21:10:10.754041175 +0000 UTC m=+0.033806604 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec  3 16:10:10 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:10:10 np0005544708 systemd[1]: libpod-conmon-7b01378c563676f5651f29c0979aa1f365c62a3fb550d22bc1fbd5d9e53884e8.scope: Deactivated successfully.
Dec  3 16:10:10 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8084179a9d395d88cbab574af8b511bca26964c7c3601ee53ffb43a1dfa7d4a5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:10 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8084179a9d395d88cbab574af8b511bca26964c7c3601ee53ffb43a1dfa7d4a5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:10:10 np0005544708 podman[95869]: 2025-12-03 21:10:10.867165789 +0000 UTC m=+0.146931218 container init e878c72c10358f82b7658245fdc47eb21e5b579574efac2e3b1bea7a27e5b1b7 (image=quay.io/ceph/ceph:v20, name=flamboyant_fermat, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:10:10 np0005544708 podman[95869]: 2025-12-03 21:10:10.874671649 +0000 UTC m=+0.154437048 container start e878c72c10358f82b7658245fdc47eb21e5b579574efac2e3b1bea7a27e5b1b7 (image=quay.io/ceph/ceph:v20, name=flamboyant_fermat, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Dec  3 16:10:10 np0005544708 podman[95869]: 2025-12-03 21:10:10.877762082 +0000 UTC m=+0.157527491 container attach e878c72c10358f82b7658245fdc47eb21e5b579574efac2e3b1bea7a27e5b1b7 (image=quay.io/ceph/ceph:v20, name=flamboyant_fermat, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  3 16:10:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:10:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:10:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:11 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Dec  3 16:10:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Dec  3 16:10:11 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023222579' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Dec  3 16:10:11 np0005544708 flamboyant_fermat[95899]: 
Dec  3 16:10:11 np0005544708 flamboyant_fermat[95899]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":6}}
Dec  3 16:10:11 np0005544708 systemd[1]: libpod-e878c72c10358f82b7658245fdc47eb21e5b579574efac2e3b1bea7a27e5b1b7.scope: Deactivated successfully.
Dec  3 16:10:11 np0005544708 podman[95869]: 2025-12-03 21:10:11.428019574 +0000 UTC m=+0.707785003 container died e878c72c10358f82b7658245fdc47eb21e5b579574efac2e3b1bea7a27e5b1b7 (image=quay.io/ceph/ceph:v20, name=flamboyant_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  3 16:10:11 np0005544708 systemd[1]: var-lib-containers-storage-overlay-8084179a9d395d88cbab574af8b511bca26964c7c3601ee53ffb43a1dfa7d4a5-merged.mount: Deactivated successfully.
Dec  3 16:10:11 np0005544708 podman[95869]: 2025-12-03 21:10:11.478920074 +0000 UTC m=+0.758685503 container remove e878c72c10358f82b7658245fdc47eb21e5b579574efac2e3b1bea7a27e5b1b7 (image=quay.io/ceph/ceph:v20, name=flamboyant_fermat, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:10:11 np0005544708 systemd[1]: libpod-conmon-e878c72c10358f82b7658245fdc47eb21e5b579574efac2e3b1bea7a27e5b1b7.scope: Deactivated successfully.
Dec  3 16:10:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:10:11 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:11 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:13 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:10:15 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:10:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:10:17 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:10:19 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v82: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:10:21
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['vms', 'images', '.mgr', 'backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:10:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.103683295811636e-07 of space, bias 4.0, pg target 0.0007324419954973963 quantized to 16 (current 1)
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  3 16:10:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:10:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:10:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:10:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec  3 16:10:22 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Dec  3 16:10:22 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  3 16:10:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Dec  3 16:10:22 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Dec  3 16:10:22 np0005544708 ceph-mgr[75500]: [progress INFO root] update: starting ev be9f06e2-0988-44e0-944c-a32f4623db2f (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  3 16:10:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec  3 16:10:22 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Dec  3 16:10:23 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:10:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec  3 16:10:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  3 16:10:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec  3 16:10:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  3 16:10:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 16:10:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Dec  3 16:10:23 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  3 16:10:23 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Dec  3 16:10:23 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  3 16:10:23 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=15.815390587s) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active pruub 80.124893188s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:23 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=15.815390587s) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown pruub 80.124893188s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:23 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Dec  3 16:10:23 np0005544708 ceph-mgr[75500]: [progress INFO root] update: starting ev 65be1077-8506-40cf-b9c2-7381cbe8578d (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  3 16:10:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec  3 16:10:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Dec  3 16:10:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec  3 16:10:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  3 16:10:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec  3 16:10:24 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-mgr[75500]: [progress INFO root] update: starting ev 164aeeb9-7441-4777-9204-3b80e724f485 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=16/17 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec  3 16:10:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Dec  3 16:10:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  3 16:10:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 16:10:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.0( empty local-lis/les=33/34 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:24 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=16/16 les/c/f=17/17/0 sis=33) [2] r=0 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:25 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v87: 38 pgs: 31 unknown, 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec  3 16:10:25 np0005544708 ceph-mgr[75500]: [progress INFO root] update: starting ev 202c5c36-38ff-46f8-88bf-13132659b89d (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 16:10:25 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 35 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=35 pruub=13.945694923s) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active pruub 85.778251648s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 35 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=35 pruub=13.945694923s) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown pruub 85.778251648s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec  3 16:10:26 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec  3 16:10:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec  3 16:10:26 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.1b( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.1d( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.1c( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.1f( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.1e( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.a( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.9( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.8( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.7( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.6( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.5( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.1( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.3( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.4( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.2( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.b( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.c( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.d( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.e( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-mgr[75500]: [progress INFO root] update: starting ev 5e903187-e192-4410-a195-d740445e0e64 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.f( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.10( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.11( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.12( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec  3 16:10:26 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.13( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.14( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.15( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.16( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.17( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.18( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.19( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.1a( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.1c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.0( empty local-lis/les=35/36 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.2( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.4( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.10( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.13( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.14( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.19( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.1a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 36 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [1] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:26 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Dec  3 16:10:26 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec  3 16:10:26 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Dec  3 16:10:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:10:26 np0005544708 ceph-mgr[75500]: [progress WARNING root] Starting Global Recovery Event,62 pgs not in active + clean state
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 35 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=35 pruub=14.451667786s) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active pruub 91.378295898s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=35 pruub=14.451667786s) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown pruub 91.378295898s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.3( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.7( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.4( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.6( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.b( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.16( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.17( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.19( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.1e( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.1d( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.f( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.11( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.10( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.12( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:26 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=18/19 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:27 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v90: 100 pgs: 62 unknown, 38 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:10:27 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Dec  3 16:10:27 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Dec  3 16:10:27 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec  3 16:10:27 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  3 16:10:27 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec  3 16:10:27 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec  3 16:10:27 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec  3 16:10:27 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  3 16:10:27 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec  3 16:10:27 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 16:10:27 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec  3 16:10:27 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec  3 16:10:27 np0005544708 ceph-mgr[75500]: [progress INFO root] update: starting ev 98b7555f-24a6-4dd0-9e93-6c9c469015a1 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  3 16:10:27 np0005544708 ceph-mgr[75500]: [progress INFO root] complete: finished ev be9f06e2-0988-44e0-944c-a32f4623db2f (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  3 16:10:27 np0005544708 ceph-mgr[75500]: [progress INFO root] Completed event be9f06e2-0988-44e0-944c-a32f4623db2f (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Dec  3 16:10:27 np0005544708 ceph-mgr[75500]: [progress INFO root] complete: finished ev 65be1077-8506-40cf-b9c2-7381cbe8578d (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  3 16:10:27 np0005544708 ceph-mgr[75500]: [progress INFO root] Completed event 65be1077-8506-40cf-b9c2-7381cbe8578d (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Dec  3 16:10:27 np0005544708 ceph-mgr[75500]: [progress INFO root] complete: finished ev 164aeeb9-7441-4777-9204-3b80e724f485 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  3 16:10:27 np0005544708 ceph-mgr[75500]: [progress INFO root] Completed event 164aeeb9-7441-4777-9204-3b80e724f485 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Dec  3 16:10:27 np0005544708 ceph-mgr[75500]: [progress INFO root] complete: finished ev 202c5c36-38ff-46f8-88bf-13132659b89d (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  3 16:10:27 np0005544708 ceph-mgr[75500]: [progress INFO root] Completed event 202c5c36-38ff-46f8-88bf-13132659b89d (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Dec  3 16:10:27 np0005544708 ceph-mgr[75500]: [progress INFO root] complete: finished ev 5e903187-e192-4410-a195-d740445e0e64 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Dec  3 16:10:27 np0005544708 ceph-mgr[75500]: [progress INFO root] Completed event 5e903187-e192-4410-a195-d740445e0e64 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 1 seconds
Dec  3 16:10:27 np0005544708 ceph-mgr[75500]: [progress INFO root] complete: finished ev 98b7555f-24a6-4dd0-9e93-6c9c469015a1 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  3 16:10:27 np0005544708 ceph-mgr[75500]: [progress INFO root] Completed event 98b7555f-24a6-4dd0-9e93-6c9c469015a1 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[6.0( v 31'39 (0'0,31'39] local-lis/les=21/22 n=22 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=37 pruub=8.843221664s) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 31'38 mlcod 31'38 active pruub 86.411857605s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.1f( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.1d( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.1e( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[6.0( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=21/22 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=37 pruub=8.843221664s) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 31'38 mlcod 0'0 unknown pruub 86.411857605s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:27 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Dec  3 16:10:27 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  3 16:10:27 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  3 16:10:27 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec  3 16:10:27 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.1c( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.8( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.b( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.6( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.5( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.1b( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.4( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.9( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.19( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.2( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.1( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.7( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=35/37 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.e( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.d( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.f( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.3( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.10( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.11( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.14( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.1a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.12( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.c( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.15( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.13( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.17( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.18( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 37 pg[4.16( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=18/18 les/c/f=19/19/0 sis=35) [0] r=0 lpr=35 pi=[18,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:27 np0005544708 systemd-logind[787]: New session 33 of user zuul.
Dec  3 16:10:27 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 37 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=37 pruub=14.622339249s) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active pruub 83.146293640s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:27 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 37 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=37 pruub=14.622339249s) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown pruub 83.146293640s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:27 np0005544708 systemd[1]: Started Session 33 of User zuul.
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec  3 16:10:28 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec  3 16:10:28 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec  3 16:10:28 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.10( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.12( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.11( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.13( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.14( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.15( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.a( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=21/22 n=1 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.5( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=21/22 n=2 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.9( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=21/22 n=1 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.4( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=21/22 n=2 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.8( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=21/22 n=1 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.7( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=21/22 n=1 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.1( v 31'39 (0'0,31'39] local-lis/les=21/22 n=2 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.17( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.6( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=21/22 n=2 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=21/22 n=1 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.16( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=21/22 n=2 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.2( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=21/22 n=2 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.e( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=21/22 n=1 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.f( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=21/22 n=1 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.d( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=21/22 n=1 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.c( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=21/22 n=1 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.8( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.9( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.7( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.4( v 31'39 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.10( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.0( empty local-lis/les=37/38 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.1( v 31'39 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.8( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.0( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 31'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.6( v 31'39 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.2( v 31'39 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.e( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 38 pg[6.c( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=21/21 les/c/f=22/22/0 sis=37) [0] r=0 lpr=37 pi=[21,37)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:28 np0005544708 python3.9[96113]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:10:29 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v93: 146 pgs: 108 unknown, 38 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:10:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec  3 16:10:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  3 16:10:29 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec  3 16:10:29 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec  3 16:10:29 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec  3 16:10:29 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec  3 16:10:29 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec  3 16:10:29 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec  3 16:10:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec  3 16:10:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 16:10:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec  3 16:10:29 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec  3 16:10:29 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Dec  3 16:10:29 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 39 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39 pruub=8.368938446s) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active pruub 83.835609436s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:29 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 39 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39 pruub=8.368938446s) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown pruub 83.835609436s@ mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Dec  3 16:10:30 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec  3 16:10:30 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  3 16:10:30 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec  3 16:10:30 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.1e( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.1d( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.16( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.10( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.17( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.14( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.b( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.7( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.d( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.19( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.1e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.12( empty local-lis/les=23/24 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.1d( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.10( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.16( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.17( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.14( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=39/40 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.7( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.d( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.19( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 40 pg[7.12( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=23/23 les/c/f=24/24/0 sis=39) [1] r=0 lpr=39 pi=[23,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:30 np0005544708 python3.9[96331]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:10:31 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v96: 177 pgs: 1 peering, 31 unknown, 145 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:10:31 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec  3 16:10:31 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec  3 16:10:31 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec  3 16:10:31 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec  3 16:10:31 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec  3 16:10:31 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec  3 16:10:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:10:31 np0005544708 ceph-mgr[75500]: [progress INFO root] Writing back 10 completed events
Dec  3 16:10:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  3 16:10:31 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:32 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:33 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v97: 177 pgs: 1 peering, 31 unknown, 145 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:10:34 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec  3 16:10:34 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec  3 16:10:34 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec  3 16:10:34 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec  3 16:10:35 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v98: 177 pgs: 1 peering, 31 unknown, 145 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:10:35 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec  3 16:10:35 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec  3 16:10:35 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec  3 16:10:35 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec  3 16:10:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:10:37 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v99: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec  3 16:10:37 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.888752937s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.577644348s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.888706207s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.577644348s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.8( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.888567924s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.577659607s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.8( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.888529778s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.577659607s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.894572258s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 active pruub 102.583770752s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.894546509s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 102.583770752s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.899888039s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 active pruub 102.589317322s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.899865150s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 102.589317322s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.888037682s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.577720642s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.888017654s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.577720642s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.7( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.888173103s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.577987671s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.7( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.888152122s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.577987671s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.5( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887830734s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.577682495s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.5( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887809753s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.577682495s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.1b( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887775421s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.577735901s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.1b( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887760162s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.577735901s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.899380684s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 active pruub 102.589401245s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.1a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.888262749s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.578292847s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.1a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.888237953s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.578292847s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.899343491s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 102.589401245s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.899204254s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 active pruub 102.589408875s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.4( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887511253s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.577743530s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.4( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887497902s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.577743530s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.899168015s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 102.589408875s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[6.1( v 31'39 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.899053574s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 active pruub 102.589332581s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[6.1( v 31'39 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.899025917s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 102.589332581s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.899174690s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 active pruub 102.589561462s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.899160385s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 102.589561462s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.1( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887281418s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.577758789s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887357712s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.577850342s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887332916s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.577850342s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.9( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887207985s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.577751160s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.1( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887244225s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.577758789s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.9( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887195587s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.577751160s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.898865700s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 active pruub 102.589576721s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.898849487s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 102.589576721s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.d( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887292862s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.578041077s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.d( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887272835s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.578041077s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.e( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887163162s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.578002930s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.e( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887149811s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.578002930s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.898716927s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 active pruub 102.589614868s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.f( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887128830s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.578063965s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.898696899s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 102.589614868s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.f( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887117386s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.578063965s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887083054s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.578102112s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.11( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887088776s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.578125000s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887045860s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.578102112s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.13( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887156487s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.578285217s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.13( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.887145042s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.578285217s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.11( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.886906624s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.578125000s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.886973381s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.578285217s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.886903763s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.578285217s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.18( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.886874199s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.578308105s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.18( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.886862755s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.578308105s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.12( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.886787415s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 101.578254700s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[4.12( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41 pruub=13.886762619s) [1] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 101.578254700s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.d( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.f( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.3( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.2( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.4( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.1( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.884406090s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348602295s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880822182s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.345054626s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.884387016s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348602295s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880803108s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.345054626s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.853688240s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.318038940s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.853674889s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.318038940s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.5( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.7( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.853358269s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.318038940s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.5( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.7( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.853232384s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.318038940s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.9( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.9( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.8( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.14( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.12( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.10( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.867569923s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.975608826s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.929427147s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.037498474s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.867554665s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975608826s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.867215157s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.975303650s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.929405212s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.037498474s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.867195129s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975303650s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.929409981s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.037582397s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.929397583s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.037582397s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1d( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.1b( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.852234840s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317932129s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.866736412s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.975479126s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.866722107s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975479126s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.866712570s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.975585938s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.852218628s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317932129s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.866684914s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975585938s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928505898s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.037483215s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928495407s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.037483215s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.866079330s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.975204468s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.866065025s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975204468s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.866003036s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.975250244s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.865991592s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975250244s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928308487s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.037651062s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.851003647s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317916870s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850990295s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317916870s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.881216049s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348243713s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.881206512s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348243713s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850802422s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317840576s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850778580s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317840576s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850716591s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317825317s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850179672s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317825317s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880301476s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348007202s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850170135s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317924500s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880270004s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348007202s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880525589s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348281860s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850155830s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317924500s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928297997s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.037651062s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.865748405s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.975204468s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.865737915s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975204468s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.865626335s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.975189209s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.865614891s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975189209s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880499840s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348281860s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880378723s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348236084s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880367279s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348236084s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880339622s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348297119s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.849840164s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317825317s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880279541s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348289490s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.849815369s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317825317s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880253792s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348289490s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880223274s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348297119s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.849054337s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317298889s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.849026680s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317298889s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848933220s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317253113s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880013466s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348335266s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848918915s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317253113s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879985809s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348335266s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880131721s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348648071s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880120277s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348648071s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848572731s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317153931s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848423004s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317047119s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848547935s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317153931s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880009651s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348655701s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879998207s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348655701s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848355293s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317047119s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879922867s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348686218s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848220825s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317001343s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879914284s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348686218s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848196983s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317001343s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879804611s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348670959s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848138809s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317008972s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879792213s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348670959s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848105431s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317008972s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848002434s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.316963196s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847991943s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316963196s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879694939s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348716736s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879670143s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348716736s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879558563s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348678589s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879547119s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348678589s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847544670s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.316764832s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879440308s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348686218s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879426003s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348686218s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847513199s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316764832s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847468376s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.316741943s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847443581s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316741943s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847066879s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.316398621s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847057343s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316398621s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846938133s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.316390991s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846894264s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.316352844s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846924782s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316390991s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879432678s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348930359s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846867561s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316352844s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879405975s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348930359s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.843698502s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.313354492s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879385948s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.349082947s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.843666077s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.313354492s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846693993s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.316413879s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879362106s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.349082947s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846668243s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316413879s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879122734s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348976135s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879112244s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348976135s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847078323s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.316993713s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879151344s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.349098206s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847055435s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316993713s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879140854s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.349098206s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[3.17( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[7.13( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[5.1e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[2.19( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[2.18( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[2.16( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[3.15( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[3.12( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[2.13( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[5.14( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[3.f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[2.11( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[5.15( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[2.f( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[7.9( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[3.c( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[7.f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[7.6( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[5.7( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[2.2( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[5.4( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[5.3( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[5.2( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[7.4( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[3.1( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[3.3( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[3.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[7.3( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[3.9( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[3.a( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[7.1f( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[2.8( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[2.b( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[2.1d( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[2.1c( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[2.1f( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[3.1b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[7.18( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[7.1b( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 41 pg[3.1f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928034782s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.037666321s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928025246s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.037666321s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.927963257s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.037696838s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.927952766s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.037696838s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.865342140s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.975158691s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.865333557s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975158691s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.927793503s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.037719727s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.927783012s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.037719727s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.927693367s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.037719727s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.927684784s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.037719727s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.927641869s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.037757874s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.927632332s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.037757874s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.929178238s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039398193s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.929169655s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039398193s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864471436s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.974800110s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864461899s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974800110s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.927361488s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.037757874s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.927351952s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.037757874s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864676476s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.975151062s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864668846s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975151062s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864478111s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.975067139s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864466667s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975067139s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928797722s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039489746s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928786278s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039489746s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864211082s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.974967957s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864202499s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974967957s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928587914s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039421082s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928578377s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039421082s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863828659s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.974784851s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863812447s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928451538s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039451599s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928428650s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039451599s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863724709s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.974784851s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863713264s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928549767s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039695740s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928540230s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039695740s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863598824s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.974769592s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863583565s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974769592s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863542557s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.974784851s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863531113s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928400040s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039718628s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928479195s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039802551s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928391457s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039718628s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928464890s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039802551s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858943939s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.970329285s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858925819s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970329285s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928402901s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039848328s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928389549s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039848328s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863302231s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.974792480s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863292694s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974792480s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858806610s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.970382690s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928353310s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039932251s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858794212s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970382690s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928339958s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039932251s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928306580s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039985657s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858775139s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.970458984s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928293228s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039985657s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858759880s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970458984s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:37 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:38 np0005544708 systemd[76584]: Starting Mark boot as successful...
Dec  3 16:10:38 np0005544708 systemd[76584]: Finished Mark boot as successful.
Dec  3 16:10:38 np0005544708 systemd-logind[787]: Session 33 logged out. Waiting for processes to exit.
Dec  3 16:10:38 np0005544708 systemd[1]: session-33.scope: Deactivated successfully.
Dec  3 16:10:38 np0005544708 systemd[1]: session-33.scope: Consumed 8.847s CPU time.
Dec  3 16:10:38 np0005544708 systemd-logind[787]: Removed session 33.
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec  3 16:10:38 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec  3 16:10:38 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 16:10:38 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 16:10:38 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  3 16:10:38 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 16:10:38 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 16:10:38 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  3 16:10:38 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec  3 16:10:38 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.1( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.7( v 31'39 lc 31'21 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.5( v 31'39 lc 31'11 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:38 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:39 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v102: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:10:39 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Dec  3 16:10:39 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec  3 16:10:39 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec  3 16:10:39 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  3 16:10:39 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec  3 16:10:39 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec  3 16:10:39 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec  3 16:10:39 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 43 pg[6.e( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.759346962s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 active pruub 102.589576721s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:39 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 43 pg[6.2( v 31'39 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.759279251s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 active pruub 102.589569092s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:39 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 43 pg[6.e( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.759297371s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 102.589576721s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:39 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 43 pg[6.2( v 31'39 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.759241104s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 102.589569092s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:39 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 43 pg[6.6( v 31'39 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.759024620s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 active pruub 102.589576721s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:39 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 43 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.753028870s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 active pruub 102.583694458s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:39 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 43 pg[6.6( v 31'39 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.758980751s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 102.589576721s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:39 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 43 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.752966881s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 102.583694458s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:39 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:39 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:39 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:39 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:40 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec  3 16:10:40 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec  3 16:10:40 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec  3 16:10:40 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec  3 16:10:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec  3 16:10:40 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  3 16:10:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec  3 16:10:40 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec  3 16:10:40 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:40 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.6( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:40 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.e( v 31'39 lc 31'19 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:40 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.2( v 31'39 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:41 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v105: 177 pgs: 4 peering, 173 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 201 B/s, 2 keys/s, 2 objects/s recovering
Dec  3 16:10:41 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec  3 16:10:41 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec  3 16:10:41 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec  3 16:10:41 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec  3 16:10:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:10:42 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec  3 16:10:42 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec  3 16:10:42 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec  3 16:10:42 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec  3 16:10:43 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v106: 177 pgs: 4 peering, 173 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 145 B/s, 1 keys/s, 1 objects/s recovering
Dec  3 16:10:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec  3 16:10:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec  3 16:10:43 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Dec  3 16:10:43 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Dec  3 16:10:44 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec  3 16:10:44 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec  3 16:10:45 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v107: 177 pgs: 4 peering, 173 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 123 B/s, 1 keys/s, 1 objects/s recovering
Dec  3 16:10:45 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec  3 16:10:45 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec  3 16:10:45 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec  3 16:10:45 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec  3 16:10:46 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec  3 16:10:46 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec  3 16:10:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:10:47 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v108: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 1 keys/s, 1 objects/s recovering
Dec  3 16:10:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Dec  3 16:10:47 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec  3 16:10:47 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Dec  3 16:10:47 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Dec  3 16:10:47 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec  3 16:10:47 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec  3 16:10:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec  3 16:10:47 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  3 16:10:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec  3 16:10:47 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec  3 16:10:47 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.897034645s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 active pruub 108.124214172s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:47 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.896965027s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.124214172s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:47 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.900068283s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 active pruub 108.127418518s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:47 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.900019646s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.127418518s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:47 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.900277138s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 active pruub 108.127799988s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:47 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.900236130s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.127799988s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:47 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.896610260s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 active pruub 108.124343872s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:47 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.896554947s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.124343872s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:47 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec  3 16:10:47 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 45 pg[6.7( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:47 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:47 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 45 pg[6.3( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:47 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:48 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec  3 16:10:48 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec  3 16:10:48 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec  3 16:10:48 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec  3 16:10:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec  3 16:10:48 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  3 16:10:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec  3 16:10:48 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec  3 16:10:48 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 46 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=31'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:48 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 46 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:48 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 46 pg[6.7( v 31'39 lc 31'21 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:48 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 46 pg[6.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=45/46 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=31'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:49 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v111: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  3 16:10:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Dec  3 16:10:49 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec  3 16:10:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec  3 16:10:49 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  3 16:10:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec  3 16:10:49 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec  3 16:10:49 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec  3 16:10:50 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec  3 16:10:50 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec  3 16:10:50 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec  3 16:10:50 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec  3 16:10:50 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec  3 16:10:50 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec  3 16:10:50 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  3 16:10:51 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec  3 16:10:51 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec  3 16:10:51 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v113: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 107 B/s, 1 keys/s, 1 objects/s recovering
Dec  3 16:10:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Dec  3 16:10:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec  3 16:10:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:10:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:10:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:10:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:10:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:10:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:10:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:10:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec  3 16:10:51 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  3 16:10:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec  3 16:10:51 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec  3 16:10:51 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=10.850582123s) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 active pruub 108.127700806s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:51 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=10.846266747s) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 active pruub 108.123405457s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:51 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=10.850494385s) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY pruub 108.127700806s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:51 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=10.846174240s) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY pruub 108.123405457s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:51 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec  3 16:10:51 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=0 lpr=48 pi=[41,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:51 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=0 lpr=48 pi=[41,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:51 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 47 pg[6.c( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47 pruub=8.712948799s) [1] r=-1 lpr=47 pi=[37,47)/1 crt=31'39 lcod 0'0 active pruub 110.589859009s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:51 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 48 pg[6.c( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47 pruub=8.712782860s) [1] r=-1 lpr=47 pi=[37,47)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 110.589859009s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:51 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:51 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 47 pg[6.4( v 31'39 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47 pruub=8.706089020s) [1] r=-1 lpr=47 pi=[37,47)/1 crt=31'39 lcod 0'0 active pruub 110.583999634s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:51 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 48 pg[6.4( v 31'39 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47 pruub=8.705857277s) [1] r=-1 lpr=47 pi=[37,47)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 110.583999634s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:51 np0005544708 ceph-mgr[75500]: [progress INFO root] Completed event a9d95566-a29f-4277-9ca9-f7ef602a478d (Global Recovery Event) in 25 seconds
Dec  3 16:10:51 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:52 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec  3 16:10:52 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec  3 16:10:52 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec  3 16:10:52 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec  3 16:10:52 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec  3 16:10:52 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 49 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=48/49 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=0 lpr=48 pi=[41,48)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:52 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  3 16:10:52 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 49 pg[6.5( v 31'39 lc 31'11 (0'0,31'39] local-lis/les=48/49 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=0 lpr=48 pi=[41,48)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:52 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.4( v 31'39 lc 31'15 (0'0,31'39] local-lis/les=47/49 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:52 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.c( v 31'39 lc 31'17 (0'0,31'39] local-lis/les=47/49 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:53 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v116: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 147 B/s, 2 keys/s, 1 objects/s recovering
Dec  3 16:10:53 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Dec  3 16:10:53 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec  3 16:10:53 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec  3 16:10:53 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec  3 16:10:53 np0005544708 systemd-logind[787]: New session 34 of user zuul.
Dec  3 16:10:53 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec  3 16:10:53 np0005544708 systemd[1]: Started Session 34 of User zuul.
Dec  3 16:10:53 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  3 16:10:53 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec  3 16:10:53 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec  3 16:10:53 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec  3 16:10:54 np0005544708 python3.9[96543]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  3 16:10:54 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  3 16:10:55 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec  3 16:10:55 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec  3 16:10:55 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v118: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 120 B/s, 1 keys/s, 1 objects/s recovering
Dec  3 16:10:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Dec  3 16:10:55 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec  3 16:10:55 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec  3 16:10:55 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec  3 16:10:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec  3 16:10:55 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec  3 16:10:55 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  3 16:10:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec  3 16:10:55 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec  3 16:10:56 np0005544708 python3.9[96717]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:10:56 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec  3 16:10:56 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec  3 16:10:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:10:56 np0005544708 ceph-mgr[75500]: [progress INFO root] Writing back 11 completed events
Dec  3 16:10:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  3 16:10:56 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:56 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  3 16:10:56 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:10:57 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v120: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 331 B/s, 1 objects/s recovering
Dec  3 16:10:57 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Dec  3 16:10:57 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec  3 16:10:57 np0005544708 python3.9[96873]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:10:57 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec  3 16:10:57 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  3 16:10:57 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec  3 16:10:57 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec  3 16:10:57 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec  3 16:10:58 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec  3 16:10:58 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec  3 16:10:58 np0005544708 python3.9[97026]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:10:58 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  3 16:10:59 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 52 pg[6.8( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52 pruub=9.355226517s) [2] r=-1 lpr=52 pi=[37,52)/1 crt=31'39 lcod 0'0 active pruub 118.589836121s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:10:59 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 52 pg[6.8( v 31'39 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52 pruub=9.355180740s) [2] r=-1 lpr=52 pi=[37,52)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 118.589836121s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:10:59 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:10:59 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v122: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 296 B/s, 1 objects/s recovering
Dec  3 16:10:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Dec  3 16:10:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec  3 16:10:59 np0005544708 python3.9[97180]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:10:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec  3 16:10:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  3 16:10:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec  3 16:10:59 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec  3 16:10:59 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 53 pg[6.8( v 31'39 (0'0,31'39] local-lis/les=52/53 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:10:59 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec  3 16:11:00 np0005544708 python3.9[97332]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:11:00 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec  3 16:11:00 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec  3 16:11:00 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  3 16:11:01 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v124: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 296 B/s, 1 objects/s recovering
Dec  3 16:11:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Dec  3 16:11:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec  3 16:11:01 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec  3 16:11:01 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec  3 16:11:01 np0005544708 python3.9[97482]: ansible-ansible.builtin.service_facts Invoked
Dec  3 16:11:01 np0005544708 network[97499]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 16:11:01 np0005544708 network[97500]: 'network-scripts' will be removed from distribution in near future.
Dec  3 16:11:01 np0005544708 network[97501]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 16:11:01 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 53 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.090121269s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=31'39 lcod 0'0 active pruub 116.124732971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:11:01 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 53 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.089967728s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 116.124732971s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:11:01 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 53 pg[6.9( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:11:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:11:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec  3 16:11:01 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec  3 16:11:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  3 16:11:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec  3 16:11:01 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec  3 16:11:01 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.742419243s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=31'39 lcod 0'0 active pruub 118.168533325s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:11:01 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.742380142s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 118.168533325s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:11:01 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 54 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:11:01 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 54 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=53/54 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:11:02 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec  3 16:11:02 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec  3 16:11:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec  3 16:11:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec  3 16:11:02 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec  3 16:11:02 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  3 16:11:02 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 55 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=54/55 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:11:03 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v127: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:11:03 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Dec  3 16:11:03 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec  3 16:11:03 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec  3 16:11:03 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  3 16:11:03 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec  3 16:11:03 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec  3 16:11:03 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec  3 16:11:04 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec  3 16:11:04 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec  3 16:11:04 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec  3 16:11:04 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec  3 16:11:04 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 56 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=8.310461044s) [1] r=-1 lpr=56 pi=[45,56)/1 crt=31'39 active pruub 122.845947266s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:11:04 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 56 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=8.310279846s) [1] r=-1 lpr=56 pi=[45,56)/1 crt=31'39 unknown NOTIFY pruub 122.845947266s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:11:04 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:11:04 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec  3 16:11:04 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec  3 16:11:04 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec  3 16:11:05 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  3 16:11:05 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 57 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=56/57 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:11:05 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec  3 16:11:05 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec  3 16:11:05 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v130: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:11:05 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Dec  3 16:11:05 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec  3 16:11:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec  3 16:11:06 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  3 16:11:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec  3 16:11:06 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec  3 16:11:06 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec  3 16:11:06 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec  3 16:11:06 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec  3 16:11:06 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec  3 16:11:06 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec  3 16:11:06 np0005544708 python3.9[97761]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:11:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:11:07 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  3 16:11:07 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v132: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  3 16:11:07 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Dec  3 16:11:07 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec  3 16:11:07 np0005544708 python3.9[97911]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:11:08 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec  3 16:11:08 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec  3 16:11:08 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  3 16:11:08 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec  3 16:11:08 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec  3 16:11:08 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec  3 16:11:08 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec  3 16:11:08 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec  3 16:11:08 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec  3 16:11:09 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  3 16:11:09 np0005544708 python3.9[98065]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:11:09 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec  3 16:11:09 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec  3 16:11:09 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 59 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=48/49 n=1 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59 pruub=15.654327393s) [1] r=-1 lpr=59 pi=[48,59)/1 crt=31'39 active pruub 134.886367798s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:11:09 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 59 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=48/49 n=1 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59 pruub=15.654233932s) [1] r=-1 lpr=59 pi=[48,59)/1 crt=31'39 unknown NOTIFY pruub 134.886367798s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:11:09 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:11:09 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v134: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  3 16:11:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Dec  3 16:11:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec  3 16:11:09 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec  3 16:11:09 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec  3 16:11:09 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec  3 16:11:09 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec  3 16:11:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec  3 16:11:10 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec  3 16:11:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  3 16:11:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec  3 16:11:10 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec  3 16:11:10 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Dec  3 16:11:10 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 60 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=59/60 n=1 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:11:10 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Dec  3 16:11:10 np0005544708 python3.9[98223]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 16:11:10 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec  3 16:11:10 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec  3 16:11:10 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec  3 16:11:10 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec  3 16:11:11 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec  3 16:11:11 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec  3 16:11:11 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  3 16:11:11 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v136: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Dec  3 16:11:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Dec  3 16:11:11 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec  3 16:11:11 np0005544708 python3.9[98357]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:11:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:11:11 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:11:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:11:11 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:11:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:11:11 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:11:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:11:11 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:11:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:11:11 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:11:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:11:11 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:11:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:11:12 np0005544708 podman[98461]: 2025-12-03 21:11:12.146527373 +0000 UTC m=+0.028518753 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:11:12 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec  3 16:11:12 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec  3 16:11:12 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec  3 16:11:12 np0005544708 podman[98461]: 2025-12-03 21:11:12.383750331 +0000 UTC m=+0.265741721 container create 864a593e9800f142def0403a4fc1c2c018cfc3b2c5ab944af812ada680a9b7f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec  3 16:11:12 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec  3 16:11:12 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:11:12 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:11:12 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:11:12 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  3 16:11:12 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec  3 16:11:12 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec  3 16:11:12 np0005544708 systemd[1]: Started libpod-conmon-864a593e9800f142def0403a4fc1c2c018cfc3b2c5ab944af812ada680a9b7f0.scope.
Dec  3 16:11:12 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:11:12 np0005544708 podman[98461]: 2025-12-03 21:11:12.497198663 +0000 UTC m=+0.379190023 container init 864a593e9800f142def0403a4fc1c2c018cfc3b2c5ab944af812ada680a9b7f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cartwright, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:11:12 np0005544708 podman[98461]: 2025-12-03 21:11:12.509120391 +0000 UTC m=+0.391111721 container start 864a593e9800f142def0403a4fc1c2c018cfc3b2c5ab944af812ada680a9b7f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cartwright, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  3 16:11:12 np0005544708 podman[98461]: 2025-12-03 21:11:12.51318999 +0000 UTC m=+0.395181330 container attach 864a593e9800f142def0403a4fc1c2c018cfc3b2c5ab944af812ada680a9b7f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cartwright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 16:11:12 np0005544708 loving_cartwright[98477]: 167 167
Dec  3 16:11:12 np0005544708 systemd[1]: libpod-864a593e9800f142def0403a4fc1c2c018cfc3b2c5ab944af812ada680a9b7f0.scope: Deactivated successfully.
Dec  3 16:11:12 np0005544708 podman[98461]: 2025-12-03 21:11:12.519237481 +0000 UTC m=+0.401228831 container died 864a593e9800f142def0403a4fc1c2c018cfc3b2c5ab944af812ada680a9b7f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cartwright, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 16:11:12 np0005544708 systemd[1]: var-lib-containers-storage-overlay-e910c4735405106ee38c599d905321d080d5a43b724138f5860b7aed94bde7f7-merged.mount: Deactivated successfully.
Dec  3 16:11:12 np0005544708 podman[98461]: 2025-12-03 21:11:12.569641888 +0000 UTC m=+0.451633238 container remove 864a593e9800f142def0403a4fc1c2c018cfc3b2c5ab944af812ada680a9b7f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_cartwright, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec  3 16:11:12 np0005544708 systemd[1]: libpod-conmon-864a593e9800f142def0403a4fc1c2c018cfc3b2c5ab944af812ada680a9b7f0.scope: Deactivated successfully.
Dec  3 16:11:12 np0005544708 podman[98507]: 2025-12-03 21:11:12.760241811 +0000 UTC m=+0.057309892 container create 6ac9b58abc007d1f0d03fa834422483f2327593202c65900aa4bd17c520587ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  3 16:11:12 np0005544708 systemd[1]: Started libpod-conmon-6ac9b58abc007d1f0d03fa834422483f2327593202c65900aa4bd17c520587ff.scope.
Dec  3 16:11:12 np0005544708 podman[98507]: 2025-12-03 21:11:12.728206015 +0000 UTC m=+0.025274196 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:11:12 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:11:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/636d53a40c06d2edca71bdb9e21cd9e67576272fa20bb585f01d12f9017dd7db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:11:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/636d53a40c06d2edca71bdb9e21cd9e67576272fa20bb585f01d12f9017dd7db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:11:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/636d53a40c06d2edca71bdb9e21cd9e67576272fa20bb585f01d12f9017dd7db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:11:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/636d53a40c06d2edca71bdb9e21cd9e67576272fa20bb585f01d12f9017dd7db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:11:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/636d53a40c06d2edca71bdb9e21cd9e67576272fa20bb585f01d12f9017dd7db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:11:12 np0005544708 podman[98507]: 2025-12-03 21:11:12.871260318 +0000 UTC m=+0.168328469 container init 6ac9b58abc007d1f0d03fa834422483f2327593202c65900aa4bd17c520587ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_wing, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:11:12 np0005544708 podman[98507]: 2025-12-03 21:11:12.880500004 +0000 UTC m=+0.177568085 container start 6ac9b58abc007d1f0d03fa834422483f2327593202c65900aa4bd17c520587ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec  3 16:11:12 np0005544708 podman[98507]: 2025-12-03 21:11:12.896698717 +0000 UTC m=+0.193766838 container attach 6ac9b58abc007d1f0d03fa834422483f2327593202c65900aa4bd17c520587ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_wing, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 16:11:13 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Dec  3 16:11:13 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  3 16:11:13 np0005544708 xenodochial_wing[98527]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:11:13 np0005544708 xenodochial_wing[98527]: --> All data devices are unavailable
Dec  3 16:11:13 np0005544708 systemd[1]: libpod-6ac9b58abc007d1f0d03fa834422483f2327593202c65900aa4bd17c520587ff.scope: Deactivated successfully.
Dec  3 16:11:13 np0005544708 podman[98507]: 2025-12-03 21:11:13.470752686 +0000 UTC m=+0.767820777 container died 6ac9b58abc007d1f0d03fa834422483f2327593202c65900aa4bd17c520587ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Dec  3 16:11:13 np0005544708 systemd[1]: var-lib-containers-storage-overlay-636d53a40c06d2edca71bdb9e21cd9e67576272fa20bb585f01d12f9017dd7db-merged.mount: Deactivated successfully.
Dec  3 16:11:13 np0005544708 podman[98507]: 2025-12-03 21:11:13.516503608 +0000 UTC m=+0.813571689 container remove 6ac9b58abc007d1f0d03fa834422483f2327593202c65900aa4bd17c520587ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_wing, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec  3 16:11:13 np0005544708 systemd[1]: libpod-conmon-6ac9b58abc007d1f0d03fa834422483f2327593202c65900aa4bd17c520587ff.scope: Deactivated successfully.
Dec  3 16:11:13 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 61 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=15.116504669s) [2] r=-1 lpr=61 pi=[45,61)/1 crt=31'39 active pruub 138.846450806s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:11:13 np0005544708 ceph-osd[86059]: osd.0 pg_epoch: 61 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=15.116334915s) [2] r=-1 lpr=61 pi=[45,61)/1 crt=31'39 unknown NOTIFY pruub 138.846450806s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:11:13 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:11:14 np0005544708 podman[98654]: 2025-12-03 21:11:14.027669147 +0000 UTC m=+0.055947406 container create 83a450290d6027a44f1a4d3a80081baae0538379c97464d51e645e8d74e542a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_agnesi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:11:14 np0005544708 systemd[1]: Started libpod-conmon-83a450290d6027a44f1a4d3a80081baae0538379c97464d51e645e8d74e542a0.scope.
Dec  3 16:11:14 np0005544708 podman[98654]: 2025-12-03 21:11:14.008670109 +0000 UTC m=+0.036948428 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:11:14 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:11:14 np0005544708 podman[98654]: 2025-12-03 21:11:14.13032906 +0000 UTC m=+0.158607409 container init 83a450290d6027a44f1a4d3a80081baae0538379c97464d51e645e8d74e542a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_agnesi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Dec  3 16:11:14 np0005544708 podman[98654]: 2025-12-03 21:11:14.13669974 +0000 UTC m=+0.164977999 container start 83a450290d6027a44f1a4d3a80081baae0538379c97464d51e645e8d74e542a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:11:14 np0005544708 podman[98654]: 2025-12-03 21:11:14.141050266 +0000 UTC m=+0.169328565 container attach 83a450290d6027a44f1a4d3a80081baae0538379c97464d51e645e8d74e542a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_agnesi, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:11:14 np0005544708 suspicious_agnesi[98673]: 167 167
Dec  3 16:11:14 np0005544708 systemd[1]: libpod-83a450290d6027a44f1a4d3a80081baae0538379c97464d51e645e8d74e542a0.scope: Deactivated successfully.
Dec  3 16:11:14 np0005544708 podman[98654]: 2025-12-03 21:11:14.144637932 +0000 UTC m=+0.172916241 container died 83a450290d6027a44f1a4d3a80081baae0538379c97464d51e645e8d74e542a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_agnesi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:11:14 np0005544708 systemd[1]: var-lib-containers-storage-overlay-bc8b685e6854a8299dc9fee8f05cebe99944a3a923c3b67768a0ab0729a7d9f9-merged.mount: Deactivated successfully.
Dec  3 16:11:14 np0005544708 podman[98654]: 2025-12-03 21:11:14.188914195 +0000 UTC m=+0.217192494 container remove 83a450290d6027a44f1a4d3a80081baae0538379c97464d51e645e8d74e542a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_agnesi, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:11:14 np0005544708 systemd[1]: libpod-conmon-83a450290d6027a44f1a4d3a80081baae0538379c97464d51e645e8d74e542a0.scope: Deactivated successfully.
Dec  3 16:11:14 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec  3 16:11:14 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec  3 16:11:14 np0005544708 podman[98698]: 2025-12-03 21:11:14.408193614 +0000 UTC m=+0.051021824 container create ce8432a3fe2c8ee40b6a48c6e6e63ef0b1e67f7b090b43795365506f11afa2b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:11:14 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec  3 16:11:14 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec  3 16:11:14 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec  3 16:11:14 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 62 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=61/62 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:11:14 np0005544708 systemd[1]: Started libpod-conmon-ce8432a3fe2c8ee40b6a48c6e6e63ef0b1e67f7b090b43795365506f11afa2b2.scope.
Dec  3 16:11:14 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:11:14 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4ae90e71c93cf7f8cc69cf08c25ba5a5168b05326e7302e06d2d8613f83ce4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:11:14 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4ae90e71c93cf7f8cc69cf08c25ba5a5168b05326e7302e06d2d8613f83ce4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:11:14 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4ae90e71c93cf7f8cc69cf08c25ba5a5168b05326e7302e06d2d8613f83ce4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:11:14 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4ae90e71c93cf7f8cc69cf08c25ba5a5168b05326e7302e06d2d8613f83ce4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:11:14 np0005544708 podman[98698]: 2025-12-03 21:11:14.385105928 +0000 UTC m=+0.027934128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:11:14 np0005544708 podman[98698]: 2025-12-03 21:11:14.491286694 +0000 UTC m=+0.134114904 container init ce8432a3fe2c8ee40b6a48c6e6e63ef0b1e67f7b090b43795365506f11afa2b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:11:14 np0005544708 podman[98698]: 2025-12-03 21:11:14.511636038 +0000 UTC m=+0.154464218 container start ce8432a3fe2c8ee40b6a48c6e6e63ef0b1e67f7b090b43795365506f11afa2b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec  3 16:11:14 np0005544708 podman[98698]: 2025-12-03 21:11:14.515159692 +0000 UTC m=+0.157987902 container attach ce8432a3fe2c8ee40b6a48c6e6e63ef0b1e67f7b090b43795365506f11afa2b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]: {
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:    "0": [
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:        {
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "devices": [
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "/dev/loop3"
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            ],
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "lv_name": "ceph_lv0",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "lv_size": "21470642176",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "name": "ceph_lv0",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "tags": {
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.cluster_name": "ceph",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.crush_device_class": "",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.encrypted": "0",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.objectstore": "bluestore",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.osd_id": "0",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.type": "block",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.vdo": "0",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.with_tpm": "0"
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            },
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "type": "block",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "vg_name": "ceph_vg0"
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:        }
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:    ],
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:    "1": [
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:        {
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "devices": [
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "/dev/loop4"
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            ],
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "lv_name": "ceph_lv1",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "lv_size": "21470642176",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "name": "ceph_lv1",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "tags": {
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.cluster_name": "ceph",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.crush_device_class": "",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.encrypted": "0",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.objectstore": "bluestore",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.osd_id": "1",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.type": "block",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.vdo": "0",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.with_tpm": "0"
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            },
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "type": "block",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "vg_name": "ceph_vg1"
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:        }
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:    ],
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:    "2": [
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:        {
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "devices": [
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "/dev/loop5"
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            ],
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "lv_name": "ceph_lv2",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "lv_size": "21470642176",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "name": "ceph_lv2",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "tags": {
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.cluster_name": "ceph",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.crush_device_class": "",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.encrypted": "0",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.objectstore": "bluestore",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.osd_id": "2",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.type": "block",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.vdo": "0",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:                "ceph.with_tpm": "0"
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            },
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "type": "block",
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:            "vg_name": "ceph_vg2"
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:        }
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]:    ]
Dec  3 16:11:14 np0005544708 optimistic_perlman[98715]: }
Dec  3 16:11:14 np0005544708 systemd[1]: libpod-ce8432a3fe2c8ee40b6a48c6e6e63ef0b1e67f7b090b43795365506f11afa2b2.scope: Deactivated successfully.
Dec  3 16:11:14 np0005544708 podman[98698]: 2025-12-03 21:11:14.818216143 +0000 UTC m=+0.461044383 container died ce8432a3fe2c8ee40b6a48c6e6e63ef0b1e67f7b090b43795365506f11afa2b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_perlman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  3 16:11:14 np0005544708 systemd[1]: var-lib-containers-storage-overlay-9d4ae90e71c93cf7f8cc69cf08c25ba5a5168b05326e7302e06d2d8613f83ce4-merged.mount: Deactivated successfully.
Dec  3 16:11:14 np0005544708 podman[98698]: 2025-12-03 21:11:14.875774864 +0000 UTC m=+0.518603034 container remove ce8432a3fe2c8ee40b6a48c6e6e63ef0b1e67f7b090b43795365506f11afa2b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:11:14 np0005544708 systemd[1]: libpod-conmon-ce8432a3fe2c8ee40b6a48c6e6e63ef0b1e67f7b090b43795365506f11afa2b2.scope: Deactivated successfully.
Dec  3 16:11:14 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec  3 16:11:14 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec  3 16:11:15 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v140: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Dec  3 16:11:16 np0005544708 podman[98812]: 2025-12-03 21:11:16.085874909 +0000 UTC m=+0.038454804 container create c3d7c29ecabc23f59fdf4df495e5324a1f65d96f8311a57ef506a5b7e14b66b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_nobel, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:11:16 np0005544708 systemd[1]: Started libpod-conmon-c3d7c29ecabc23f59fdf4df495e5324a1f65d96f8311a57ef506a5b7e14b66b1.scope.
Dec  3 16:11:16 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:11:16 np0005544708 podman[98812]: 2025-12-03 21:11:16.066921525 +0000 UTC m=+0.019501440 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:11:16 np0005544708 podman[98812]: 2025-12-03 21:11:16.175054295 +0000 UTC m=+0.127634200 container init c3d7c29ecabc23f59fdf4df495e5324a1f65d96f8311a57ef506a5b7e14b66b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_nobel, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:11:16 np0005544708 podman[98812]: 2025-12-03 21:11:16.183984572 +0000 UTC m=+0.136564497 container start c3d7c29ecabc23f59fdf4df495e5324a1f65d96f8311a57ef506a5b7e14b66b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_nobel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 16:11:16 np0005544708 podman[98812]: 2025-12-03 21:11:16.18812461 +0000 UTC m=+0.140704525 container attach c3d7c29ecabc23f59fdf4df495e5324a1f65d96f8311a57ef506a5b7e14b66b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  3 16:11:16 np0005544708 friendly_nobel[98829]: 167 167
Dec  3 16:11:16 np0005544708 systemd[1]: libpod-c3d7c29ecabc23f59fdf4df495e5324a1f65d96f8311a57ef506a5b7e14b66b1.scope: Deactivated successfully.
Dec  3 16:11:16 np0005544708 podman[98834]: 2025-12-03 21:11:16.241902832 +0000 UTC m=+0.034626254 container died c3d7c29ecabc23f59fdf4df495e5324a1f65d96f8311a57ef506a5b7e14b66b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 16:11:16 np0005544708 systemd[1]: var-lib-containers-storage-overlay-37d486d2175e775d4438a46ffd08c8e4fcb0d52ca87650a8eea337c13bf48675-merged.mount: Deactivated successfully.
Dec  3 16:11:16 np0005544708 podman[98834]: 2025-12-03 21:11:16.289772304 +0000 UTC m=+0.082495726 container remove c3d7c29ecabc23f59fdf4df495e5324a1f65d96f8311a57ef506a5b7e14b66b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Dec  3 16:11:16 np0005544708 systemd[1]: libpod-conmon-c3d7c29ecabc23f59fdf4df495e5324a1f65d96f8311a57ef506a5b7e14b66b1.scope: Deactivated successfully.
Dec  3 16:11:16 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec  3 16:11:16 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec  3 16:11:16 np0005544708 podman[98856]: 2025-12-03 21:11:16.480761211 +0000 UTC m=+0.038595388 container create 1c2a95f45fcdb5b135d26477557c4bda3d5d988eb739135f28901398598e3c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bohr, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 16:11:16 np0005544708 systemd[1]: Started libpod-conmon-1c2a95f45fcdb5b135d26477557c4bda3d5d988eb739135f28901398598e3c6c.scope.
Dec  3 16:11:16 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:11:16 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec9b31421c5409e2b16d8eff2753b50de52aa4aa55cdba301a5cfdd650e27be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:11:16 np0005544708 podman[98856]: 2025-12-03 21:11:16.465895184 +0000 UTC m=+0.023729381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:11:16 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec9b31421c5409e2b16d8eff2753b50de52aa4aa55cdba301a5cfdd650e27be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:11:16 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec9b31421c5409e2b16d8eff2753b50de52aa4aa55cdba301a5cfdd650e27be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:11:16 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec9b31421c5409e2b16d8eff2753b50de52aa4aa55cdba301a5cfdd650e27be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:11:16 np0005544708 podman[98856]: 2025-12-03 21:11:16.579472741 +0000 UTC m=+0.137306948 container init 1c2a95f45fcdb5b135d26477557c4bda3d5d988eb739135f28901398598e3c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:11:16 np0005544708 podman[98856]: 2025-12-03 21:11:16.589372054 +0000 UTC m=+0.147206231 container start 1c2a95f45fcdb5b135d26477557c4bda3d5d988eb739135f28901398598e3c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True)
Dec  3 16:11:16 np0005544708 podman[98856]: 2025-12-03 21:11:16.599625078 +0000 UTC m=+0.157459275 container attach 1c2a95f45fcdb5b135d26477557c4bda3d5d988eb739135f28901398598e3c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bohr, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  3 16:11:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:11:16 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec  3 16:11:16 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec  3 16:11:17 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 96 B/s, 0 objects/s recovering
Dec  3 16:11:17 np0005544708 lvm[98953]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:11:17 np0005544708 lvm[98953]: VG ceph_vg1 finished
Dec  3 16:11:17 np0005544708 lvm[98954]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:11:17 np0005544708 lvm[98954]: VG ceph_vg2 finished
Dec  3 16:11:17 np0005544708 lvm[98950]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:11:17 np0005544708 lvm[98950]: VG ceph_vg0 finished
Dec  3 16:11:17 np0005544708 cool_bohr[98873]: {}
Dec  3 16:11:17 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec  3 16:11:17 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec  3 16:11:17 np0005544708 systemd[1]: libpod-1c2a95f45fcdb5b135d26477557c4bda3d5d988eb739135f28901398598e3c6c.scope: Deactivated successfully.
Dec  3 16:11:17 np0005544708 podman[98856]: 2025-12-03 21:11:17.425752785 +0000 UTC m=+0.983587002 container died 1c2a95f45fcdb5b135d26477557c4bda3d5d988eb739135f28901398598e3c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bohr, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 16:11:17 np0005544708 systemd[1]: libpod-1c2a95f45fcdb5b135d26477557c4bda3d5d988eb739135f28901398598e3c6c.scope: Consumed 1.364s CPU time.
Dec  3 16:11:17 np0005544708 systemd[1]: var-lib-containers-storage-overlay-1ec9b31421c5409e2b16d8eff2753b50de52aa4aa55cdba301a5cfdd650e27be-merged.mount: Deactivated successfully.
Dec  3 16:11:17 np0005544708 podman[98856]: 2025-12-03 21:11:17.477775026 +0000 UTC m=+1.035609223 container remove 1c2a95f45fcdb5b135d26477557c4bda3d5d988eb739135f28901398598e3c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bohr, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  3 16:11:17 np0005544708 systemd[1]: libpod-conmon-1c2a95f45fcdb5b135d26477557c4bda3d5d988eb739135f28901398598e3c6c.scope: Deactivated successfully.
Dec  3 16:11:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:11:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:11:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:11:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:11:18 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:11:18 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:11:18 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec  3 16:11:18 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec  3 16:11:19 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v142: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 76 B/s, 0 objects/s recovering
Dec  3 16:11:19 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec  3 16:11:19 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec  3 16:11:20 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.d scrub starts
Dec  3 16:11:20 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.d scrub ok
Dec  3 16:11:20 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec  3 16:11:20 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec  3 16:11:20 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec  3 16:11:21 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:11:21
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.data', 'backups', 'volumes', '.mgr', 'cephfs.cephfs.meta']
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 0 objects/s recovering
Dec  3 16:11:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:11:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:11:22 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec  3 16:11:22 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec  3 16:11:23 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v144: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 0 objects/s recovering
Dec  3 16:11:24 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Dec  3 16:11:24 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Dec  3 16:11:25 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 57 B/s, 0 objects/s recovering
Dec  3 16:11:26 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec  3 16:11:26 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec  3 16:11:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:11:27 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v146: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 0 objects/s recovering
Dec  3 16:11:27 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec  3 16:11:27 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec  3 16:11:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:11:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:11:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:11:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:11:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:11:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:11:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:11:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:11:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:11:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:11:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:11:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:11:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:11:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:11:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:11:28 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec  3 16:11:28 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec  3 16:11:29 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec  3 16:11:29 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec  3 16:11:29 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v147: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:11:30 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec  3 16:11:30 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec  3 16:11:31 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:11:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:11:32 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec  3 16:11:32 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec  3 16:11:32 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec  3 16:11:32 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec  3 16:11:33 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec  3 16:11:33 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec  3 16:11:33 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v149: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:11:34 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec  3 16:11:34 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec  3 16:11:34 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec  3 16:11:34 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec  3 16:11:35 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec  3 16:11:35 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec  3 16:11:35 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v150: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:11:35 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec  3 16:11:35 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec  3 16:11:36 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec  3 16:11:36 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec  3 16:11:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:11:37 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v151: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:11:39 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v152: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:11:40 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec  3 16:11:40 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec  3 16:11:41 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:11:41 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec  3 16:11:41 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec  3 16:11:41 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec  3 16:11:41 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec  3 16:11:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:11:42 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec  3 16:11:42 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec  3 16:11:43 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v154: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:11:44 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec  3 16:11:44 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec  3 16:11:45 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v155: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:11:45 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec  3 16:11:45 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec  3 16:11:46 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec  3 16:11:46 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec  3 16:11:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:11:47 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v156: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:11:47 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec  3 16:11:47 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec  3 16:11:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec  3 16:11:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec  3 16:11:49 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v157: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:11:50 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec  3 16:11:50 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec  3 16:11:51 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v158: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:11:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:11:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:11:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:11:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:11:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:11:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:11:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:11:52 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec  3 16:11:52 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec  3 16:11:52 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec  3 16:11:52 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec  3 16:11:53 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v159: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:11:53 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec  3 16:11:53 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec  3 16:11:53 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec  3 16:11:53 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec  3 16:11:55 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v160: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:11:55 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec  3 16:11:55 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec  3 16:11:55 np0005544708 python3.9[99224]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:11:56 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec  3 16:11:56 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec  3 16:11:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:11:57 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v161: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:11:57 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec  3 16:11:57 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec  3 16:11:57 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec  3 16:11:57 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec  3 16:11:57 np0005544708 python3.9[99511]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  3 16:11:58 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec  3 16:11:58 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec  3 16:11:58 np0005544708 python3.9[99663]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  3 16:11:59 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v162: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:11:59 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec  3 16:11:59 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec  3 16:11:59 np0005544708 python3.9[99815]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:12:00 np0005544708 python3.9[99967]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  3 16:12:01 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v163: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:12:01 np0005544708 python3.9[100119]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:12:02 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec  3 16:12:02 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec  3 16:12:02 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec  3 16:12:02 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec  3 16:12:02 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec  3 16:12:02 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec  3 16:12:02 np0005544708 python3.9[100271]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:12:03 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v164: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:03 np0005544708 python3.9[100349]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:12:03 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec  3 16:12:03 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec  3 16:12:04 np0005544708 python3.9[100501]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:12:04 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec  3 16:12:04 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec  3 16:12:05 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v165: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:05 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec  3 16:12:05 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec  3 16:12:05 np0005544708 python3.9[100655]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  3 16:12:06 np0005544708 python3.9[100808]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  3 16:12:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:12:07 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v166: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:07 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec  3 16:12:07 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec  3 16:12:07 np0005544708 python3.9[100961]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  3 16:12:08 np0005544708 python3.9[101113]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  3 16:12:09 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v167: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:09 np0005544708 python3.9[101265]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:12:09 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec  3 16:12:09 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec  3 16:12:11 np0005544708 python3.9[101418]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:12:11 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v168: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:11 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec  3 16:12:11 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec  3 16:12:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:12:11 np0005544708 python3.9[101571]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:12:12 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec  3 16:12:12 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec  3 16:12:12 np0005544708 python3.9[101649]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:12:13 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v169: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:13 np0005544708 python3.9[101801]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:12:13 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec  3 16:12:13 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec  3 16:12:13 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec  3 16:12:13 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec  3 16:12:13 np0005544708 python3.9[101879]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:12:14 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec  3 16:12:14 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec  3 16:12:15 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v170: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:15 np0005544708 python3.9[102031]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:12:15 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec  3 16:12:15 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec  3 16:12:16 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec  3 16:12:16 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec  3 16:12:16 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec  3 16:12:16 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec  3 16:12:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:12:17 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v171: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:17 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec  3 16:12:17 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec  3 16:12:17 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec  3 16:12:17 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec  3 16:12:17 np0005544708 python3.9[102182]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:12:18 np0005544708 python3.9[102398]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  3 16:12:18 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:12:18 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:12:18 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:12:18 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:12:18 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:12:18 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:12:18 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:12:18 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:12:18 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:12:18 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:12:18 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:12:18 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:12:18 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec  3 16:12:18 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec  3 16:12:18 np0005544708 podman[102606]: 2025-12-03 21:12:18.805201772 +0000 UTC m=+0.049743817 container create c259561a95386af7e842b396475bac578ce8f8fb1e9db0cd3547a043bcbab860 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 16:12:18 np0005544708 systemd[1]: Started libpod-conmon-c259561a95386af7e842b396475bac578ce8f8fb1e9db0cd3547a043bcbab860.scope.
Dec  3 16:12:18 np0005544708 podman[102606]: 2025-12-03 21:12:18.782861851 +0000 UTC m=+0.027403886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:12:18 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:12:18 np0005544708 podman[102606]: 2025-12-03 21:12:18.903703355 +0000 UTC m=+0.148245380 container init c259561a95386af7e842b396475bac578ce8f8fb1e9db0cd3547a043bcbab860 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:12:18 np0005544708 podman[102606]: 2025-12-03 21:12:18.910234943 +0000 UTC m=+0.154776948 container start c259561a95386af7e842b396475bac578ce8f8fb1e9db0cd3547a043bcbab860 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:12:18 np0005544708 podman[102606]: 2025-12-03 21:12:18.913311982 +0000 UTC m=+0.157853987 container attach c259561a95386af7e842b396475bac578ce8f8fb1e9db0cd3547a043bcbab860 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_hypatia, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:12:18 np0005544708 nervous_hypatia[102642]: 167 167
Dec  3 16:12:18 np0005544708 systemd[1]: libpod-c259561a95386af7e842b396475bac578ce8f8fb1e9db0cd3547a043bcbab860.scope: Deactivated successfully.
Dec  3 16:12:18 np0005544708 conmon[102642]: conmon c259561a95386af7e842 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c259561a95386af7e842b396475bac578ce8f8fb1e9db0cd3547a043bcbab860.scope/container/memory.events
Dec  3 16:12:18 np0005544708 podman[102606]: 2025-12-03 21:12:18.916793981 +0000 UTC m=+0.161335996 container died c259561a95386af7e842b396475bac578ce8f8fb1e9db0cd3547a043bcbab860 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:12:18 np0005544708 systemd[1]: var-lib-containers-storage-overlay-41856caa5538d07d43fc7635daa27bbcefcdd62f140e3bdc2e61377c39bd2c13-merged.mount: Deactivated successfully.
Dec  3 16:12:18 np0005544708 podman[102606]: 2025-12-03 21:12:18.953894725 +0000 UTC m=+0.198436740 container remove c259561a95386af7e842b396475bac578ce8f8fb1e9db0cd3547a043bcbab860 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_hypatia, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:12:18 np0005544708 systemd[1]: libpod-conmon-c259561a95386af7e842b396475bac578ce8f8fb1e9db0cd3547a043bcbab860.scope: Deactivated successfully.
Dec  3 16:12:19 np0005544708 python3.9[102639]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:12:19 np0005544708 podman[102687]: 2025-12-03 21:12:19.14311949 +0000 UTC m=+0.049030527 container create 9319acf5684c010917579d21408162f1ca5763723937156aa69c93abb894ebf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_edison, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 16:12:19 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:12:19 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:12:19 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:12:19 np0005544708 systemd[1]: Started libpod-conmon-9319acf5684c010917579d21408162f1ca5763723937156aa69c93abb894ebf4.scope.
Dec  3 16:12:19 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v172: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:19 np0005544708 podman[102687]: 2025-12-03 21:12:19.120968825 +0000 UTC m=+0.026879902 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:12:19 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:12:19 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8dcb95bbbca97bcf11db1615a5464486cb099702bc1b27c36931006bd52186/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:12:19 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8dcb95bbbca97bcf11db1615a5464486cb099702bc1b27c36931006bd52186/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:12:19 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8dcb95bbbca97bcf11db1615a5464486cb099702bc1b27c36931006bd52186/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:12:19 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8dcb95bbbca97bcf11db1615a5464486cb099702bc1b27c36931006bd52186/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:12:19 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8dcb95bbbca97bcf11db1615a5464486cb099702bc1b27c36931006bd52186/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:12:19 np0005544708 podman[102687]: 2025-12-03 21:12:19.231003011 +0000 UTC m=+0.136914078 container init 9319acf5684c010917579d21408162f1ca5763723937156aa69c93abb894ebf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_edison, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec  3 16:12:19 np0005544708 podman[102687]: 2025-12-03 21:12:19.240990487 +0000 UTC m=+0.146901524 container start 9319acf5684c010917579d21408162f1ca5763723937156aa69c93abb894ebf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 16:12:19 np0005544708 podman[102687]: 2025-12-03 21:12:19.244614583 +0000 UTC m=+0.150525620 container attach 9319acf5684c010917579d21408162f1ca5763723937156aa69c93abb894ebf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Dec  3 16:12:19 np0005544708 intelligent_edison[102706]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:12:19 np0005544708 intelligent_edison[102706]: --> All data devices are unavailable
Dec  3 16:12:19 np0005544708 systemd[1]: libpod-9319acf5684c010917579d21408162f1ca5763723937156aa69c93abb894ebf4.scope: Deactivated successfully.
Dec  3 16:12:19 np0005544708 podman[102687]: 2025-12-03 21:12:19.772009709 +0000 UTC m=+0.677920746 container died 9319acf5684c010917579d21408162f1ca5763723937156aa69c93abb894ebf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_edison, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  3 16:12:19 np0005544708 systemd[1]: var-lib-containers-storage-overlay-8e8dcb95bbbca97bcf11db1615a5464486cb099702bc1b27c36931006bd52186-merged.mount: Deactivated successfully.
Dec  3 16:12:19 np0005544708 podman[102687]: 2025-12-03 21:12:19.815543286 +0000 UTC m=+0.721454313 container remove 9319acf5684c010917579d21408162f1ca5763723937156aa69c93abb894ebf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_edison, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 16:12:19 np0005544708 systemd[1]: libpod-conmon-9319acf5684c010917579d21408162f1ca5763723937156aa69c93abb894ebf4.scope: Deactivated successfully.
Dec  3 16:12:20 np0005544708 python3.9[102891]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:12:20 np0005544708 podman[102928]: 2025-12-03 21:12:20.329912426 +0000 UTC m=+0.058743741 container create b2f8ad000549e2bec096e9048ad59cbf327f04ba411d710f3a0c56b6b8698819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:12:20 np0005544708 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  3 16:12:20 np0005544708 systemd[1]: Started libpod-conmon-b2f8ad000549e2bec096e9048ad59cbf327f04ba411d710f3a0c56b6b8698819.scope.
Dec  3 16:12:20 np0005544708 podman[102928]: 2025-12-03 21:12:20.305180239 +0000 UTC m=+0.034011544 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:12:20 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:12:20 np0005544708 podman[102928]: 2025-12-03 21:12:20.425259639 +0000 UTC m=+0.154090974 container init b2f8ad000549e2bec096e9048ad59cbf327f04ba411d710f3a0c56b6b8698819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_rosalind, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 16:12:20 np0005544708 podman[102928]: 2025-12-03 21:12:20.433354663 +0000 UTC m=+0.162185958 container start b2f8ad000549e2bec096e9048ad59cbf327f04ba411d710f3a0c56b6b8698819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_rosalind, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:12:20 np0005544708 podman[102928]: 2025-12-03 21:12:20.437464512 +0000 UTC m=+0.166295807 container attach b2f8ad000549e2bec096e9048ad59cbf327f04ba411d710f3a0c56b6b8698819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_rosalind, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  3 16:12:20 np0005544708 cool_rosalind[102950]: 167 167
Dec  3 16:12:20 np0005544708 systemd[1]: libpod-b2f8ad000549e2bec096e9048ad59cbf327f04ba411d710f3a0c56b6b8698819.scope: Deactivated successfully.
Dec  3 16:12:20 np0005544708 podman[102928]: 2025-12-03 21:12:20.43924645 +0000 UTC m=+0.168077745 container died b2f8ad000549e2bec096e9048ad59cbf327f04ba411d710f3a0c56b6b8698819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  3 16:12:20 np0005544708 systemd[1]: var-lib-containers-storage-overlay-c8971bb5d4e2a5298f36d2341aebe64cfcc8ec9bab2bf2a7dadd8d2a08b2dce1-merged.mount: Deactivated successfully.
Dec  3 16:12:20 np0005544708 podman[102928]: 2025-12-03 21:12:20.497169198 +0000 UTC m=+0.226000463 container remove b2f8ad000549e2bec096e9048ad59cbf327f04ba411d710f3a0c56b6b8698819 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_rosalind, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  3 16:12:20 np0005544708 systemd[1]: libpod-conmon-b2f8ad000549e2bec096e9048ad59cbf327f04ba411d710f3a0c56b6b8698819.scope: Deactivated successfully.
Dec  3 16:12:20 np0005544708 systemd[1]: tuned.service: Deactivated successfully.
Dec  3 16:12:20 np0005544708 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  3 16:12:20 np0005544708 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  3 16:12:20 np0005544708 podman[102979]: 2025-12-03 21:12:20.718197698 +0000 UTC m=+0.055922136 container create 11ba17ed3cb74bae427fc989755162f00f4b2a9dfcded4fffdbcf05b19e01f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 16:12:20 np0005544708 systemd[1]: Started libpod-conmon-11ba17ed3cb74bae427fc989755162f00f4b2a9dfcded4fffdbcf05b19e01f5a.scope.
Dec  3 16:12:20 np0005544708 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  3 16:12:20 np0005544708 podman[102979]: 2025-12-03 21:12:20.686198318 +0000 UTC m=+0.023922776 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:12:20 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:12:20 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edf480ccc4f5da821f4c1935c83ef5369bf21fcae9396657b2d68031f7dcca1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:12:20 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edf480ccc4f5da821f4c1935c83ef5369bf21fcae9396657b2d68031f7dcca1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:12:20 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edf480ccc4f5da821f4c1935c83ef5369bf21fcae9396657b2d68031f7dcca1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:12:20 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edf480ccc4f5da821f4c1935c83ef5369bf21fcae9396657b2d68031f7dcca1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:12:20 np0005544708 podman[102979]: 2025-12-03 21:12:20.829118954 +0000 UTC m=+0.166843472 container init 11ba17ed3cb74bae427fc989755162f00f4b2a9dfcded4fffdbcf05b19e01f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  3 16:12:20 np0005544708 podman[102979]: 2025-12-03 21:12:20.845324305 +0000 UTC m=+0.183048763 container start 11ba17ed3cb74bae427fc989755162f00f4b2a9dfcded4fffdbcf05b19e01f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:12:20 np0005544708 podman[102979]: 2025-12-03 21:12:20.930018554 +0000 UTC m=+0.267743052 container attach 11ba17ed3cb74bae427fc989755162f00f4b2a9dfcded4fffdbcf05b19e01f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]: {
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:    "0": [
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:        {
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "devices": [
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "/dev/loop3"
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            ],
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "lv_name": "ceph_lv0",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "lv_size": "21470642176",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "name": "ceph_lv0",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "tags": {
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.cluster_name": "ceph",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.crush_device_class": "",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.encrypted": "0",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.objectstore": "bluestore",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.osd_id": "0",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.type": "block",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.vdo": "0",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.with_tpm": "0"
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            },
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "type": "block",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "vg_name": "ceph_vg0"
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:        }
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:    ],
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:    "1": [
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:        {
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "devices": [
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "/dev/loop4"
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            ],
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "lv_name": "ceph_lv1",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "lv_size": "21470642176",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "name": "ceph_lv1",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "tags": {
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.cluster_name": "ceph",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.crush_device_class": "",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.encrypted": "0",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.objectstore": "bluestore",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.osd_id": "1",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.type": "block",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.vdo": "0",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.with_tpm": "0"
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            },
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "type": "block",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "vg_name": "ceph_vg1"
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:        }
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:    ],
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:    "2": [
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:        {
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "devices": [
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "/dev/loop5"
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            ],
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "lv_name": "ceph_lv2",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "lv_size": "21470642176",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "name": "ceph_lv2",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "tags": {
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.cluster_name": "ceph",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.crush_device_class": "",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.encrypted": "0",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.objectstore": "bluestore",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.osd_id": "2",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.type": "block",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.vdo": "0",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:                "ceph.with_tpm": "0"
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            },
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "type": "block",
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:            "vg_name": "ceph_vg2"
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:        }
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]:    ]
Dec  3 16:12:21 np0005544708 nostalgic_mclean[102999]: }
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:12:21
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['images', 'backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', '.mgr']
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:12:21 np0005544708 systemd[1]: libpod-11ba17ed3cb74bae427fc989755162f00f4b2a9dfcded4fffdbcf05b19e01f5a.scope: Deactivated successfully.
Dec  3 16:12:21 np0005544708 podman[102979]: 2025-12-03 21:12:21.201964646 +0000 UTC m=+0.539689104 container died 11ba17ed3cb74bae427fc989755162f00f4b2a9dfcded4fffdbcf05b19e01f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v173: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:21 np0005544708 systemd[1]: var-lib-containers-storage-overlay-8edf480ccc4f5da821f4c1935c83ef5369bf21fcae9396657b2d68031f7dcca1-merged.mount: Deactivated successfully.
Dec  3 16:12:21 np0005544708 podman[102979]: 2025-12-03 21:12:21.250089284 +0000 UTC m=+0.587813702 container remove 11ba17ed3cb74bae427fc989755162f00f4b2a9dfcded4fffdbcf05b19e01f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mclean, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:12:21 np0005544708 systemd[1]: libpod-conmon-11ba17ed3cb74bae427fc989755162f00f4b2a9dfcded4fffdbcf05b19e01f5a.scope: Deactivated successfully.
Dec  3 16:12:21 np0005544708 python3.9[103193]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:12:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:12:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:12:21 np0005544708 podman[103258]: 2025-12-03 21:12:21.825033323 +0000 UTC m=+0.056965224 container create 8a654df33291ecdfca283720f51f17875dbba47778e55380a93399cecf3eee1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chatelet, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  3 16:12:21 np0005544708 systemd[1]: Started libpod-conmon-8a654df33291ecdfca283720f51f17875dbba47778e55380a93399cecf3eee1e.scope.
Dec  3 16:12:21 np0005544708 podman[103258]: 2025-12-03 21:12:21.797858552 +0000 UTC m=+0.029790473 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:12:21 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:12:21 np0005544708 podman[103258]: 2025-12-03 21:12:21.931543781 +0000 UTC m=+0.163475742 container init 8a654df33291ecdfca283720f51f17875dbba47778e55380a93399cecf3eee1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:12:21 np0005544708 podman[103258]: 2025-12-03 21:12:21.944149697 +0000 UTC m=+0.176081608 container start 8a654df33291ecdfca283720f51f17875dbba47778e55380a93399cecf3eee1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chatelet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:12:21 np0005544708 podman[103258]: 2025-12-03 21:12:21.947727161 +0000 UTC m=+0.179659042 container attach 8a654df33291ecdfca283720f51f17875dbba47778e55380a93399cecf3eee1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chatelet, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 16:12:21 np0005544708 modest_chatelet[103274]: 167 167
Dec  3 16:12:21 np0005544708 systemd[1]: libpod-8a654df33291ecdfca283720f51f17875dbba47778e55380a93399cecf3eee1e.scope: Deactivated successfully.
Dec  3 16:12:21 np0005544708 podman[103258]: 2025-12-03 21:12:21.952382975 +0000 UTC m=+0.184314916 container died 8a654df33291ecdfca283720f51f17875dbba47778e55380a93399cecf3eee1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chatelet, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  3 16:12:21 np0005544708 systemd[1]: var-lib-containers-storage-overlay-8361d89ff0e0a4ecca4ea17bc885ebb22322aef83cc9a0931e2da2205335a6ea-merged.mount: Deactivated successfully.
Dec  3 16:12:22 np0005544708 podman[103258]: 2025-12-03 21:12:21.999792594 +0000 UTC m=+0.231724475 container remove 8a654df33291ecdfca283720f51f17875dbba47778e55380a93399cecf3eee1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 16:12:22 np0005544708 systemd[1]: libpod-conmon-8a654df33291ecdfca283720f51f17875dbba47778e55380a93399cecf3eee1e.scope: Deactivated successfully.
Dec  3 16:12:22 np0005544708 podman[103299]: 2025-12-03 21:12:22.217127636 +0000 UTC m=+0.050505223 container create 2818224afdf5dc2624219eefeb862ade009c4694febb48ad69987dd10ba038bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  3 16:12:22 np0005544708 systemd[1]: Started libpod-conmon-2818224afdf5dc2624219eefeb862ade009c4694febb48ad69987dd10ba038bd.scope.
Dec  3 16:12:22 np0005544708 podman[103299]: 2025-12-03 21:12:22.197213817 +0000 UTC m=+0.030591394 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:12:22 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:12:22 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aba2d6f6a6d39b710f7f32ccf3d28f1e84c02cf77e605d36bdc1f7e6edd6d33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:12:22 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aba2d6f6a6d39b710f7f32ccf3d28f1e84c02cf77e605d36bdc1f7e6edd6d33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:12:22 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aba2d6f6a6d39b710f7f32ccf3d28f1e84c02cf77e605d36bdc1f7e6edd6d33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:12:22 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aba2d6f6a6d39b710f7f32ccf3d28f1e84c02cf77e605d36bdc1f7e6edd6d33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:12:22 np0005544708 podman[103299]: 2025-12-03 21:12:22.32422003 +0000 UTC m=+0.157597637 container init 2818224afdf5dc2624219eefeb862ade009c4694febb48ad69987dd10ba038bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:12:22 np0005544708 podman[103299]: 2025-12-03 21:12:22.339493446 +0000 UTC m=+0.172871003 container start 2818224afdf5dc2624219eefeb862ade009c4694febb48ad69987dd10ba038bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 16:12:22 np0005544708 podman[103299]: 2025-12-03 21:12:22.343844551 +0000 UTC m=+0.177222118 container attach 2818224afdf5dc2624219eefeb862ade009c4694febb48ad69987dd10ba038bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 16:12:22 np0005544708 lvm[103394]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:12:22 np0005544708 lvm[103394]: VG ceph_vg1 finished
Dec  3 16:12:22 np0005544708 lvm[103393]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:12:22 np0005544708 lvm[103393]: VG ceph_vg0 finished
Dec  3 16:12:23 np0005544708 lvm[103396]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:12:23 np0005544708 lvm[103396]: VG ceph_vg2 finished
Dec  3 16:12:23 np0005544708 dazzling_satoshi[103315]: {}
Dec  3 16:12:23 np0005544708 systemd[1]: libpod-2818224afdf5dc2624219eefeb862ade009c4694febb48ad69987dd10ba038bd.scope: Deactivated successfully.
Dec  3 16:12:23 np0005544708 podman[103299]: 2025-12-03 21:12:23.156908285 +0000 UTC m=+0.990285872 container died 2818224afdf5dc2624219eefeb862ade009c4694febb48ad69987dd10ba038bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 16:12:23 np0005544708 systemd[1]: libpod-2818224afdf5dc2624219eefeb862ade009c4694febb48ad69987dd10ba038bd.scope: Consumed 1.315s CPU time.
Dec  3 16:12:23 np0005544708 systemd[1]: var-lib-containers-storage-overlay-7aba2d6f6a6d39b710f7f32ccf3d28f1e84c02cf77e605d36bdc1f7e6edd6d33-merged.mount: Deactivated successfully.
Dec  3 16:12:23 np0005544708 podman[103299]: 2025-12-03 21:12:23.20871215 +0000 UTC m=+1.042089707 container remove 2818224afdf5dc2624219eefeb862ade009c4694febb48ad69987dd10ba038bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_satoshi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:12:23 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v174: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:23 np0005544708 systemd[1]: libpod-conmon-2818224afdf5dc2624219eefeb862ade009c4694febb48ad69987dd10ba038bd.scope: Deactivated successfully.
Dec  3 16:12:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:12:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:12:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:12:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:12:23 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec  3 16:12:23 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec  3 16:12:23 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec  3 16:12:23 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec  3 16:12:23 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec  3 16:12:23 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec  3 16:12:24 np0005544708 python3.9[103562]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:12:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:12:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:12:25 np0005544708 python3.9[103716]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:12:25 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v175: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:25 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec  3 16:12:25 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec  3 16:12:25 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec  3 16:12:25 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec  3 16:12:25 np0005544708 systemd-logind[787]: Session 34 logged out. Waiting for processes to exit.
Dec  3 16:12:25 np0005544708 systemd[1]: session-34.scope: Deactivated successfully.
Dec  3 16:12:25 np0005544708 systemd[1]: session-34.scope: Consumed 1min 7.887s CPU time.
Dec  3 16:12:25 np0005544708 systemd-logind[787]: Removed session 34.
Dec  3 16:12:26 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec  3 16:12:26 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec  3 16:12:26 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec  3 16:12:26 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec  3 16:12:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:12:27 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v176: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:12:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:12:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:12:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:12:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:12:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:12:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:12:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:12:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:12:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:12:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:12:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:12:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:12:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:12:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:12:29 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v177: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:29 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec  3 16:12:29 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec  3 16:12:29 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec  3 16:12:29 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec  3 16:12:29 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec  3 16:12:29 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec  3 16:12:30 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec  3 16:12:30 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec  3 16:12:30 np0005544708 systemd-logind[787]: New session 35 of user zuul.
Dec  3 16:12:30 np0005544708 systemd[1]: Started Session 35 of User zuul.
Dec  3 16:12:31 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v178: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:31 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec  3 16:12:31 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec  3 16:12:31 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec  3 16:12:31 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec  3 16:12:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:12:31 np0005544708 python3.9[103896]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:12:32 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec  3 16:12:32 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec  3 16:12:33 np0005544708 python3.9[104052]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  3 16:12:33 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v179: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:33 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec  3 16:12:33 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec  3 16:12:34 np0005544708 python3.9[104205]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 16:12:35 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v180: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:35 np0005544708 python3.9[104289]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  3 16:12:35 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec  3 16:12:35 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec  3 16:12:36 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec  3 16:12:36 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec  3 16:12:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:12:37 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v181: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:37 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec  3 16:12:37 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec  3 16:12:37 np0005544708 python3.9[104442]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:12:38 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec  3 16:12:38 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec  3 16:12:39 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v182: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:39 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec  3 16:12:39 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec  3 16:12:39 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec  3 16:12:39 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec  3 16:12:39 np0005544708 python3.9[104595]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 16:12:40 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec  3 16:12:40 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec  3 16:12:40 np0005544708 python3.9[104748]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:12:41 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v183: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:41 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Dec  3 16:12:41 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Dec  3 16:12:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:12:41 np0005544708 python3.9[104900]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  3 16:12:42 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec  3 16:12:42 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec  3 16:12:43 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v184: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:43 np0005544708 python3.9[105050]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:12:43 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec  3 16:12:43 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec  3 16:12:44 np0005544708 python3.9[105208]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:12:44 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec  3 16:12:44 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec  3 16:12:45 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v185: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:46 np0005544708 python3.9[105361]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:12:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:12:47 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v186: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:47 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec  3 16:12:47 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec  3 16:12:47 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec  3 16:12:47 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec  3 16:12:48 np0005544708 python3.9[105648]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  3 16:12:48 np0005544708 python3.9[105798]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:12:49 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v187: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:49 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec  3 16:12:49 np0005544708 ceph-osd[86059]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec  3 16:12:49 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec  3 16:12:49 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec  3 16:12:49 np0005544708 python3.9[105952]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:12:51 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v188: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:12:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:12:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:12:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:12:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:12:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:12:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:12:51 np0005544708 python3.9[106105]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:12:53 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v189: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:53 np0005544708 python3.9[106258]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:12:54 np0005544708 python3.9[106412]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec  3 16:12:55 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v190: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:55 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec  3 16:12:55 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec  3 16:12:55 np0005544708 systemd[1]: session-35.scope: Deactivated successfully.
Dec  3 16:12:55 np0005544708 systemd[1]: session-35.scope: Consumed 19.223s CPU time.
Dec  3 16:12:55 np0005544708 systemd-logind[787]: Session 35 logged out. Waiting for processes to exit.
Dec  3 16:12:55 np0005544708 systemd-logind[787]: Removed session 35.
Dec  3 16:12:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:12:57 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v191: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:59 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v192: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:12:59 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec  3 16:12:59 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec  3 16:13:01 np0005544708 systemd-logind[787]: New session 36 of user zuul.
Dec  3 16:13:01 np0005544708 systemd[1]: Started Session 36 of User zuul.
Dec  3 16:13:01 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v193: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:13:02 np0005544708 python3.9[106591]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:13:03 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v194: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:03 np0005544708 python3.9[106745]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 16:13:04 np0005544708 python3.9[106939]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:13:04 np0005544708 systemd[1]: session-36.scope: Deactivated successfully.
Dec  3 16:13:04 np0005544708 systemd[1]: session-36.scope: Consumed 2.600s CPU time.
Dec  3 16:13:04 np0005544708 systemd-logind[787]: Session 36 logged out. Waiting for processes to exit.
Dec  3 16:13:05 np0005544708 systemd-logind[787]: Removed session 36.
Dec  3 16:13:05 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v195: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:13:07 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v196: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:09 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v197: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:10 np0005544708 systemd-logind[787]: New session 37 of user zuul.
Dec  3 16:13:10 np0005544708 systemd[1]: Started Session 37 of User zuul.
Dec  3 16:13:11 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v198: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:11 np0005544708 python3.9[107119]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:13:11 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec  3 16:13:11 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec  3 16:13:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:13:12 np0005544708 python3.9[107273]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:13:13 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v199: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:13 np0005544708 python3.9[107429]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 16:13:14 np0005544708 python3.9[107513]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:13:15 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v200: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:16 np0005544708 python3.9[107667]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 16:13:16 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec  3 16:13:16 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec  3 16:13:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:13:17 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v201: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:17 np0005544708 python3.9[107862]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:13:18 np0005544708 python3.9[108014]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:13:18 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec  3 16:13:18 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec  3 16:13:19 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v202: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:19 np0005544708 python3.9[108179]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:13:19 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec  3 16:13:19 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec  3 16:13:19 np0005544708 python3.9[108257]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:13:20 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec  3 16:13:20 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec  3 16:13:20 np0005544708 python3.9[108409]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:13:21
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'backups', 'vms', 'volumes']
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v203: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:21 np0005544708 python3.9[108487]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:13:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:13:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:13:22 np0005544708 python3.9[108639]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:13:22 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec  3 16:13:22 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec  3 16:13:22 np0005544708 python3.9[108791]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:13:23 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v204: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:23 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec  3 16:13:23 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec  3 16:13:23 np0005544708 python3.9[108966]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:13:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:13:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:13:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:13:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:13:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:13:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:13:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:13:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:13:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:13:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:13:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:13:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:13:24 np0005544708 python3.9[109174]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:13:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:13:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:13:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:13:24 np0005544708 podman[109284]: 2025-12-03 21:13:24.67463092 +0000 UTC m=+0.058946248 container create ececa866e2916f47eb9900c4039e100c54861795fe9e0acd7176bc14d3f6e120 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Dec  3 16:13:24 np0005544708 systemd[1]: Started libpod-conmon-ececa866e2916f47eb9900c4039e100c54861795fe9e0acd7176bc14d3f6e120.scope.
Dec  3 16:13:24 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:13:24 np0005544708 podman[109284]: 2025-12-03 21:13:24.653105905 +0000 UTC m=+0.037421283 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:13:24 np0005544708 podman[109284]: 2025-12-03 21:13:24.763197321 +0000 UTC m=+0.147512679 container init ececa866e2916f47eb9900c4039e100c54861795fe9e0acd7176bc14d3f6e120 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:13:24 np0005544708 podman[109284]: 2025-12-03 21:13:24.775159701 +0000 UTC m=+0.159475069 container start ececa866e2916f47eb9900c4039e100c54861795fe9e0acd7176bc14d3f6e120 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:13:24 np0005544708 podman[109284]: 2025-12-03 21:13:24.779272631 +0000 UTC m=+0.163587969 container attach ececa866e2916f47eb9900c4039e100c54861795fe9e0acd7176bc14d3f6e120 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:13:24 np0005544708 thirsty_turing[109336]: 167 167
Dec  3 16:13:24 np0005544708 systemd[1]: libpod-ececa866e2916f47eb9900c4039e100c54861795fe9e0acd7176bc14d3f6e120.scope: Deactivated successfully.
Dec  3 16:13:24 np0005544708 podman[109284]: 2025-12-03 21:13:24.784793049 +0000 UTC m=+0.169108377 container died ececa866e2916f47eb9900c4039e100c54861795fe9e0acd7176bc14d3f6e120 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 16:13:24 np0005544708 systemd[1]: var-lib-containers-storage-overlay-11f0a95f9dc99f3006ec7aa399e5a074bfd8629007f6baac81b30f94a4a35d18-merged.mount: Deactivated successfully.
Dec  3 16:13:24 np0005544708 podman[109284]: 2025-12-03 21:13:24.826706821 +0000 UTC m=+0.211022159 container remove ececa866e2916f47eb9900c4039e100c54861795fe9e0acd7176bc14d3f6e120 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 16:13:24 np0005544708 systemd[1]: libpod-conmon-ececa866e2916f47eb9900c4039e100c54861795fe9e0acd7176bc14d3f6e120.scope: Deactivated successfully.
Dec  3 16:13:25 np0005544708 podman[109427]: 2025-12-03 21:13:25.006939824 +0000 UTC m=+0.062443762 container create 7a1e81dae286abc6245901e9ea14b804343050f540f4cb492d2c0b1b666d8536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_shamir, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:13:25 np0005544708 systemd[1]: Started libpod-conmon-7a1e81dae286abc6245901e9ea14b804343050f540f4cb492d2c0b1b666d8536.scope.
Dec  3 16:13:25 np0005544708 podman[109427]: 2025-12-03 21:13:24.978077652 +0000 UTC m=+0.033581640 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:13:25 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:13:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a533eb7e03fc03bf36a511408a541ea6331fc595641ebb8c6923c884ff46b6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:13:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a533eb7e03fc03bf36a511408a541ea6331fc595641ebb8c6923c884ff46b6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:13:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a533eb7e03fc03bf36a511408a541ea6331fc595641ebb8c6923c884ff46b6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:13:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a533eb7e03fc03bf36a511408a541ea6331fc595641ebb8c6923c884ff46b6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:13:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a533eb7e03fc03bf36a511408a541ea6331fc595641ebb8c6923c884ff46b6d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:13:25 np0005544708 podman[109427]: 2025-12-03 21:13:25.116286341 +0000 UTC m=+0.171790279 container init 7a1e81dae286abc6245901e9ea14b804343050f540f4cb492d2c0b1b666d8536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_shamir, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:13:25 np0005544708 podman[109427]: 2025-12-03 21:13:25.124637734 +0000 UTC m=+0.180141672 container start 7a1e81dae286abc6245901e9ea14b804343050f540f4cb492d2c0b1b666d8536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Dec  3 16:13:25 np0005544708 podman[109427]: 2025-12-03 21:13:25.129137025 +0000 UTC m=+0.184640993 container attach 7a1e81dae286abc6245901e9ea14b804343050f540f4cb492d2c0b1b666d8536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:13:25 np0005544708 python3.9[109426]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:13:25 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v205: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:25 np0005544708 gallant_shamir[109443]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:13:25 np0005544708 gallant_shamir[109443]: --> All data devices are unavailable
Dec  3 16:13:25 np0005544708 systemd[1]: libpod-7a1e81dae286abc6245901e9ea14b804343050f540f4cb492d2c0b1b666d8536.scope: Deactivated successfully.
Dec  3 16:13:25 np0005544708 podman[109427]: 2025-12-03 21:13:25.733827079 +0000 UTC m=+0.789330987 container died 7a1e81dae286abc6245901e9ea14b804343050f540f4cb492d2c0b1b666d8536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_shamir, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  3 16:13:25 np0005544708 systemd[1]: var-lib-containers-storage-overlay-6a533eb7e03fc03bf36a511408a541ea6331fc595641ebb8c6923c884ff46b6d-merged.mount: Deactivated successfully.
Dec  3 16:13:25 np0005544708 podman[109427]: 2025-12-03 21:13:25.781364431 +0000 UTC m=+0.836868339 container remove 7a1e81dae286abc6245901e9ea14b804343050f540f4cb492d2c0b1b666d8536 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  3 16:13:25 np0005544708 systemd[1]: libpod-conmon-7a1e81dae286abc6245901e9ea14b804343050f540f4cb492d2c0b1b666d8536.scope: Deactivated successfully.
Dec  3 16:13:26 np0005544708 podman[109537]: 2025-12-03 21:13:26.282730268 +0000 UTC m=+0.044751727 container create de90cd022f8dc9e8a714fdd96c6abcb2940e035c4ae8653a967d1b29fce0a72d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:13:26 np0005544708 systemd[1]: Started libpod-conmon-de90cd022f8dc9e8a714fdd96c6abcb2940e035c4ae8653a967d1b29fce0a72d.scope.
Dec  3 16:13:26 np0005544708 podman[109537]: 2025-12-03 21:13:26.263378741 +0000 UTC m=+0.025400240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:13:26 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:13:26 np0005544708 podman[109537]: 2025-12-03 21:13:26.398328393 +0000 UTC m=+0.160349942 container init de90cd022f8dc9e8a714fdd96c6abcb2940e035c4ae8653a967d1b29fce0a72d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_curie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:13:26 np0005544708 podman[109537]: 2025-12-03 21:13:26.410137419 +0000 UTC m=+0.172158908 container start de90cd022f8dc9e8a714fdd96c6abcb2940e035c4ae8653a967d1b29fce0a72d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:13:26 np0005544708 podman[109537]: 2025-12-03 21:13:26.414759653 +0000 UTC m=+0.176781212 container attach de90cd022f8dc9e8a714fdd96c6abcb2940e035c4ae8653a967d1b29fce0a72d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  3 16:13:26 np0005544708 infallible_curie[109554]: 167 167
Dec  3 16:13:26 np0005544708 systemd[1]: libpod-de90cd022f8dc9e8a714fdd96c6abcb2940e035c4ae8653a967d1b29fce0a72d.scope: Deactivated successfully.
Dec  3 16:13:26 np0005544708 podman[109537]: 2025-12-03 21:13:26.416222111 +0000 UTC m=+0.178243570 container died de90cd022f8dc9e8a714fdd96c6abcb2940e035c4ae8653a967d1b29fce0a72d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Dec  3 16:13:26 np0005544708 systemd[1]: var-lib-containers-storage-overlay-c75db3a862c3d6f25e17a0afe423d032c173e74ee674fcbcc543b275f2f73135-merged.mount: Deactivated successfully.
Dec  3 16:13:26 np0005544708 podman[109537]: 2025-12-03 21:13:26.465707696 +0000 UTC m=+0.227729145 container remove de90cd022f8dc9e8a714fdd96c6abcb2940e035c4ae8653a967d1b29fce0a72d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_curie, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:13:26 np0005544708 systemd[1]: libpod-conmon-de90cd022f8dc9e8a714fdd96c6abcb2940e035c4ae8653a967d1b29fce0a72d.scope: Deactivated successfully.
Dec  3 16:13:26 np0005544708 podman[109600]: 2025-12-03 21:13:26.630665411 +0000 UTC m=+0.054011077 container create 817f0f2b97ffd928a48e4666d863bab72b3b486f68c66e150841f63d2d701cc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 16:13:26 np0005544708 systemd[1]: Started libpod-conmon-817f0f2b97ffd928a48e4666d863bab72b3b486f68c66e150841f63d2d701cc4.scope.
Dec  3 16:13:26 np0005544708 podman[109600]: 2025-12-03 21:13:26.603165035 +0000 UTC m=+0.026510791 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:13:26 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:13:26 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/593d36532f230cd0f1643f29b9aeb72c6a0c8ee58aea6344b2509f8d1d646cef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:13:26 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/593d36532f230cd0f1643f29b9aeb72c6a0c8ee58aea6344b2509f8d1d646cef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:13:26 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/593d36532f230cd0f1643f29b9aeb72c6a0c8ee58aea6344b2509f8d1d646cef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:13:26 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/593d36532f230cd0f1643f29b9aeb72c6a0c8ee58aea6344b2509f8d1d646cef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:13:26 np0005544708 podman[109600]: 2025-12-03 21:13:26.732847645 +0000 UTC m=+0.156193331 container init 817f0f2b97ffd928a48e4666d863bab72b3b486f68c66e150841f63d2d701cc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 16:13:26 np0005544708 podman[109600]: 2025-12-03 21:13:26.740734097 +0000 UTC m=+0.164079763 container start 817f0f2b97ffd928a48e4666d863bab72b3b486f68c66e150841f63d2d701cc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_leavitt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 16:13:26 np0005544708 podman[109600]: 2025-12-03 21:13:26.745833193 +0000 UTC m=+0.169178879 container attach 817f0f2b97ffd928a48e4666d863bab72b3b486f68c66e150841f63d2d701cc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Dec  3 16:13:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]: {
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:    "0": [
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:        {
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "devices": [
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "/dev/loop3"
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            ],
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "lv_name": "ceph_lv0",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "lv_size": "21470642176",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "name": "ceph_lv0",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "tags": {
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.cluster_name": "ceph",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.crush_device_class": "",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.encrypted": "0",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.objectstore": "bluestore",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.osd_id": "0",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.type": "block",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.vdo": "0",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.with_tpm": "0"
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            },
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "type": "block",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "vg_name": "ceph_vg0"
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:        }
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:    ],
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:    "1": [
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:        {
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "devices": [
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "/dev/loop4"
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            ],
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "lv_name": "ceph_lv1",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "lv_size": "21470642176",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "name": "ceph_lv1",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "tags": {
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.cluster_name": "ceph",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.crush_device_class": "",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.encrypted": "0",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.objectstore": "bluestore",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.osd_id": "1",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.type": "block",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.vdo": "0",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.with_tpm": "0"
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            },
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "type": "block",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "vg_name": "ceph_vg1"
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:        }
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:    ],
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:    "2": [
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:        {
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "devices": [
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "/dev/loop5"
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            ],
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "lv_name": "ceph_lv2",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "lv_size": "21470642176",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "name": "ceph_lv2",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "tags": {
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.cluster_name": "ceph",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.crush_device_class": "",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.encrypted": "0",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.objectstore": "bluestore",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.osd_id": "2",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.type": "block",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.vdo": "0",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:                "ceph.with_tpm": "0"
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            },
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "type": "block",
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:            "vg_name": "ceph_vg2"
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:        }
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]:    ]
Dec  3 16:13:27 np0005544708 jovial_leavitt[109617]: }
Dec  3 16:13:27 np0005544708 systemd[1]: libpod-817f0f2b97ffd928a48e4666d863bab72b3b486f68c66e150841f63d2d701cc4.scope: Deactivated successfully.
Dec  3 16:13:27 np0005544708 podman[109600]: 2025-12-03 21:13:27.102901199 +0000 UTC m=+0.526246935 container died 817f0f2b97ffd928a48e4666d863bab72b3b486f68c66e150841f63d2d701cc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_leavitt, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 16:13:27 np0005544708 systemd[1]: var-lib-containers-storage-overlay-593d36532f230cd0f1643f29b9aeb72c6a0c8ee58aea6344b2509f8d1d646cef-merged.mount: Deactivated successfully.
Dec  3 16:13:27 np0005544708 podman[109600]: 2025-12-03 21:13:27.163781259 +0000 UTC m=+0.587126955 container remove 817f0f2b97ffd928a48e4666d863bab72b3b486f68c66e150841f63d2d701cc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_leavitt, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 16:13:27 np0005544708 systemd[1]: libpod-conmon-817f0f2b97ffd928a48e4666d863bab72b3b486f68c66e150841f63d2d701cc4.scope: Deactivated successfully.
Dec  3 16:13:27 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v206: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:27 np0005544708 python3.9[109753]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:13:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:13:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:13:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:13:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:13:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:13:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:13:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:13:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:13:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:13:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:13:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:13:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:13:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:13:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:13:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:13:27 np0005544708 podman[109881]: 2025-12-03 21:13:27.701220492 +0000 UTC m=+0.052817974 container create 84283aaf51ee6fbd337265037466b8790a80dda67424df51a239f55cf37e6986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec  3 16:13:27 np0005544708 systemd[1]: Started libpod-conmon-84283aaf51ee6fbd337265037466b8790a80dda67424df51a239f55cf37e6986.scope.
Dec  3 16:13:27 np0005544708 podman[109881]: 2025-12-03 21:13:27.684172376 +0000 UTC m=+0.035769868 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:13:27 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:13:27 np0005544708 podman[109881]: 2025-12-03 21:13:27.809357116 +0000 UTC m=+0.160954608 container init 84283aaf51ee6fbd337265037466b8790a80dda67424df51a239f55cf37e6986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_brattain, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:13:27 np0005544708 podman[109881]: 2025-12-03 21:13:27.821369798 +0000 UTC m=+0.172967310 container start 84283aaf51ee6fbd337265037466b8790a80dda67424df51a239f55cf37e6986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_brattain, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  3 16:13:27 np0005544708 podman[109881]: 2025-12-03 21:13:27.825396736 +0000 UTC m=+0.176994228 container attach 84283aaf51ee6fbd337265037466b8790a80dda67424df51a239f55cf37e6986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 16:13:27 np0005544708 agitated_brattain[109941]: 167 167
Dec  3 16:13:27 np0005544708 systemd[1]: libpod-84283aaf51ee6fbd337265037466b8790a80dda67424df51a239f55cf37e6986.scope: Deactivated successfully.
Dec  3 16:13:27 np0005544708 podman[109881]: 2025-12-03 21:13:27.828842878 +0000 UTC m=+0.180440420 container died 84283aaf51ee6fbd337265037466b8790a80dda67424df51a239f55cf37e6986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_brattain, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec  3 16:13:27 np0005544708 systemd[1]: var-lib-containers-storage-overlay-965802ae162c59b72d941624b2a6269f63d6e6e2aac0afa1942601f39185d983-merged.mount: Deactivated successfully.
Dec  3 16:13:27 np0005544708 podman[109881]: 2025-12-03 21:13:27.883251834 +0000 UTC m=+0.234849316 container remove 84283aaf51ee6fbd337265037466b8790a80dda67424df51a239f55cf37e6986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_brattain, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:13:27 np0005544708 systemd[1]: libpod-conmon-84283aaf51ee6fbd337265037466b8790a80dda67424df51a239f55cf37e6986.scope: Deactivated successfully.
Dec  3 16:13:28 np0005544708 podman[110024]: 2025-12-03 21:13:28.105457851 +0000 UTC m=+0.057408727 container create 6dd5bff3bee5e442811fd03f928034f40c606689843645d18f1eaff81384d0b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_burnell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:13:28 np0005544708 systemd[76584]: Created slice User Background Tasks Slice.
Dec  3 16:13:28 np0005544708 systemd[76584]: Starting Cleanup of User's Temporary Files and Directories...
Dec  3 16:13:28 np0005544708 systemd[1]: Started libpod-conmon-6dd5bff3bee5e442811fd03f928034f40c606689843645d18f1eaff81384d0b5.scope.
Dec  3 16:13:28 np0005544708 systemd[76584]: Finished Cleanup of User's Temporary Files and Directories.
Dec  3 16:13:28 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:13:28 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d08f100ec548369a78c276cac6c4e72884f6eb6c1f47fa08dda7fcc9f96e894/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:13:28 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d08f100ec548369a78c276cac6c4e72884f6eb6c1f47fa08dda7fcc9f96e894/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:13:28 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d08f100ec548369a78c276cac6c4e72884f6eb6c1f47fa08dda7fcc9f96e894/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:13:28 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d08f100ec548369a78c276cac6c4e72884f6eb6c1f47fa08dda7fcc9f96e894/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:13:28 np0005544708 podman[110024]: 2025-12-03 21:13:28.174229082 +0000 UTC m=+0.126179958 container init 6dd5bff3bee5e442811fd03f928034f40c606689843645d18f1eaff81384d0b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:13:28 np0005544708 podman[110024]: 2025-12-03 21:13:28.079815585 +0000 UTC m=+0.031766451 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:13:28 np0005544708 podman[110024]: 2025-12-03 21:13:28.193458807 +0000 UTC m=+0.145409673 container start 6dd5bff3bee5e442811fd03f928034f40c606689843645d18f1eaff81384d0b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  3 16:13:28 np0005544708 podman[110024]: 2025-12-03 21:13:28.197301639 +0000 UTC m=+0.149252515 container attach 6dd5bff3bee5e442811fd03f928034f40c606689843645d18f1eaff81384d0b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_burnell, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:13:28 np0005544708 python3.9[110018]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:13:28 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec  3 16:13:28 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec  3 16:13:28 np0005544708 lvm[110271]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:13:28 np0005544708 lvm[110271]: VG ceph_vg0 finished
Dec  3 16:13:28 np0005544708 lvm[110274]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:13:28 np0005544708 lvm[110274]: VG ceph_vg1 finished
Dec  3 16:13:28 np0005544708 python3.9[110253]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:13:28 np0005544708 lvm[110276]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:13:28 np0005544708 lvm[110276]: VG ceph_vg2 finished
Dec  3 16:13:29 np0005544708 compassionate_burnell[110042]: {}
Dec  3 16:13:29 np0005544708 systemd[1]: libpod-6dd5bff3bee5e442811fd03f928034f40c606689843645d18f1eaff81384d0b5.scope: Deactivated successfully.
Dec  3 16:13:29 np0005544708 systemd[1]: libpod-6dd5bff3bee5e442811fd03f928034f40c606689843645d18f1eaff81384d0b5.scope: Consumed 1.382s CPU time.
Dec  3 16:13:29 np0005544708 podman[110024]: 2025-12-03 21:13:29.118718029 +0000 UTC m=+1.070668885 container died 6dd5bff3bee5e442811fd03f928034f40c606689843645d18f1eaff81384d0b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_burnell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:13:29 np0005544708 systemd[1]: var-lib-containers-storage-overlay-2d08f100ec548369a78c276cac6c4e72884f6eb6c1f47fa08dda7fcc9f96e894-merged.mount: Deactivated successfully.
Dec  3 16:13:29 np0005544708 podman[110024]: 2025-12-03 21:13:29.164206537 +0000 UTC m=+1.116157363 container remove 6dd5bff3bee5e442811fd03f928034f40c606689843645d18f1eaff81384d0b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_burnell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  3 16:13:29 np0005544708 systemd[1]: libpod-conmon-6dd5bff3bee5e442811fd03f928034f40c606689843645d18f1eaff81384d0b5.scope: Deactivated successfully.
Dec  3 16:13:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:13:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:13:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:13:29 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v207: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:13:29 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec  3 16:13:29 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec  3 16:13:29 np0005544708 python3.9[110467]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:13:29 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:13:29 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:13:30 np0005544708 python3.9[110620]: ansible-service_facts Invoked
Dec  3 16:13:30 np0005544708 network[110637]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 16:13:30 np0005544708 network[110638]: 'network-scripts' will be removed from distribution in near future.
Dec  3 16:13:30 np0005544708 network[110639]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 16:13:31 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v208: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:31 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec  3 16:13:31 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec  3 16:13:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:13:33 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v209: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:33 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec  3 16:13:33 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec  3 16:13:35 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v210: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:35 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec  3 16:13:35 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec  3 16:13:36 np0005544708 python3.9[111091]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:13:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:13:37 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v211: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:38 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec  3 16:13:38 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec  3 16:13:38 np0005544708 python3.9[111244]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  3 16:13:39 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v212: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:39 np0005544708 python3.9[111396]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:13:40 np0005544708 python3.9[111474]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:13:40 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec  3 16:13:40 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec  3 16:13:41 np0005544708 python3.9[111626]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:13:41 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v213: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:41 np0005544708 python3.9[111704]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:13:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:13:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec  3 16:13:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec  3 16:13:42 np0005544708 python3.9[111856]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:13:43 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v214: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:43 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec  3 16:13:43 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec  3 16:13:43 np0005544708 python3.9[112008]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 16:13:44 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec  3 16:13:44 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec  3 16:13:45 np0005544708 python3.9[112092]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:13:45 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v215: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:45 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec  3 16:13:45 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec  3 16:13:45 np0005544708 systemd[1]: session-37.scope: Deactivated successfully.
Dec  3 16:13:45 np0005544708 systemd[1]: session-37.scope: Consumed 26.024s CPU time.
Dec  3 16:13:45 np0005544708 systemd-logind[787]: Session 37 logged out. Waiting for processes to exit.
Dec  3 16:13:45 np0005544708 systemd-logind[787]: Removed session 37.
Dec  3 16:13:46 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec  3 16:13:46 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec  3 16:13:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:13:47 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v216: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:47 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec  3 16:13:47 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec  3 16:13:49 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v217: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:51 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v218: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:51 np0005544708 systemd-logind[787]: New session 38 of user zuul.
Dec  3 16:13:51 np0005544708 systemd[1]: Started Session 38 of User zuul.
Dec  3 16:13:51 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec  3 16:13:51 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec  3 16:13:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:13:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:13:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:13:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:13:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:13:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:13:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:13:52 np0005544708 python3.9[112274]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:13:53 np0005544708 python3.9[112426]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:13:53 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v219: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:53 np0005544708 python3.9[112504]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:13:54 np0005544708 systemd[1]: session-38.scope: Deactivated successfully.
Dec  3 16:13:54 np0005544708 systemd[1]: session-38.scope: Consumed 1.811s CPU time.
Dec  3 16:13:54 np0005544708 systemd-logind[787]: Session 38 logged out. Waiting for processes to exit.
Dec  3 16:13:54 np0005544708 systemd-logind[787]: Removed session 38.
Dec  3 16:13:55 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v220: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:13:57 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v221: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:59 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v222: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:13:59 np0005544708 systemd-logind[787]: New session 39 of user zuul.
Dec  3 16:13:59 np0005544708 systemd[1]: Started Session 39 of User zuul.
Dec  3 16:14:01 np0005544708 python3.9[112682]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:14:01 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v223: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:14:02 np0005544708 python3.9[112838]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:03 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v224: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:03 np0005544708 python3.9[113013]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:03 np0005544708 python3.9[113091]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.r4rq9cn2 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:04 np0005544708 python3.9[113243]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:05 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v225: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:05 np0005544708 python3.9[113321]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.h9vsxned recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:06 np0005544708 python3.9[113473]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:14:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:14:06 np0005544708 python3.9[113625]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:07 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v226: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:07 np0005544708 python3.9[113703]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:14:07 np0005544708 python3.9[113855]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:08 np0005544708 python3.9[113933]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:14:09 np0005544708 python3.9[114085]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:09 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v227: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:09 np0005544708 python3.9[114237]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:10 np0005544708 python3.9[114316]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:11 np0005544708 python3.9[114468]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:11 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v228: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:11 np0005544708 python3.9[114546]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:14:12 np0005544708 python3.9[114698]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:14:12 np0005544708 systemd[1]: Reloading.
Dec  3 16:14:12 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:14:12 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:14:13 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v229: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:13 np0005544708 python3.9[114888]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:14 np0005544708 python3.9[114966]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:15 np0005544708 python3.9[115118]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:15 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v230: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:15 np0005544708 python3.9[115196]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:16 np0005544708 python3.9[115348]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:14:16 np0005544708 systemd[1]: Reloading.
Dec  3 16:14:16 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:14:16 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:14:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:14:16 np0005544708 systemd[1]: Starting Create netns directory...
Dec  3 16:14:16 np0005544708 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  3 16:14:16 np0005544708 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  3 16:14:16 np0005544708 systemd[1]: Finished Create netns directory.
Dec  3 16:14:17 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v231: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:17 np0005544708 python3.9[115543]: ansible-ansible.builtin.service_facts Invoked
Dec  3 16:14:17 np0005544708 network[115560]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 16:14:17 np0005544708 network[115561]: 'network-scripts' will be removed from distribution in near future.
Dec  3 16:14:17 np0005544708 network[115562]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 16:14:19 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v232: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:14:21
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'vms', 'backups', 'images']
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v233: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:14:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:14:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:14:21 np0005544708 python3.9[115824]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:22 np0005544708 python3.9[115902]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:23 np0005544708 python3.9[116054]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:23 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v234: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:23 np0005544708 python3.9[116206]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:24 np0005544708 python3.9[116284]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:25 np0005544708 python3.9[116436]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  3 16:14:25 np0005544708 systemd[1]: Starting Time & Date Service...
Dec  3 16:14:25 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v235: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:25 np0005544708 systemd[1]: Started Time & Date Service.
Dec  3 16:14:26 np0005544708 python3.9[116592]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:26 np0005544708 python3.9[116744]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:27 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v236: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:27 np0005544708 python3.9[116822]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:14:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:14:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:14:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:14:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:14:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:14:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:14:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:14:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:14:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:14:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:14:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:14:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:14:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:14:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:14:27 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:14:28 np0005544708 python3.9[116974]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:28 np0005544708 python3.9[117052]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.lo55t4w7 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:29 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v237: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:29 np0005544708 python3.9[117204]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:29 np0005544708 python3.9[117346]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:14:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:14:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:14:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:14:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:14:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:14:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:14:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:14:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:14:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:14:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:14:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:14:30 np0005544708 podman[117501]: 2025-12-03 21:14:30.378031828 +0000 UTC m=+0.027910311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:14:30 np0005544708 python3.9[117590]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:14:31 np0005544708 podman[117501]: 2025-12-03 21:14:31.036139876 +0000 UTC m=+0.686018359 container create 72a7841b0f36710d63d6788970562f298be7c9e31a5064a63831afc4ff07151b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_carver, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:14:31 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:14:31 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:14:31 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:14:31 np0005544708 systemd[1]: Started libpod-conmon-72a7841b0f36710d63d6788970562f298be7c9e31a5064a63831afc4ff07151b.scope.
Dec  3 16:14:31 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:14:31 np0005544708 podman[117501]: 2025-12-03 21:14:31.179035233 +0000 UTC m=+0.828913736 container init 72a7841b0f36710d63d6788970562f298be7c9e31a5064a63831afc4ff07151b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 16:14:31 np0005544708 podman[117501]: 2025-12-03 21:14:31.190307199 +0000 UTC m=+0.840185682 container start 72a7841b0f36710d63d6788970562f298be7c9e31a5064a63831afc4ff07151b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_carver, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  3 16:14:31 np0005544708 romantic_carver[117670]: 167 167
Dec  3 16:14:31 np0005544708 podman[117501]: 2025-12-03 21:14:31.194701819 +0000 UTC m=+0.844580372 container attach 72a7841b0f36710d63d6788970562f298be7c9e31a5064a63831afc4ff07151b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_carver, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:14:31 np0005544708 systemd[1]: libpod-72a7841b0f36710d63d6788970562f298be7c9e31a5064a63831afc4ff07151b.scope: Deactivated successfully.
Dec  3 16:14:31 np0005544708 podman[117501]: 2025-12-03 21:14:31.196396295 +0000 UTC m=+0.846274818 container died 72a7841b0f36710d63d6788970562f298be7c9e31a5064a63831afc4ff07151b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_carver, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 16:14:31 np0005544708 systemd[1]: var-lib-containers-storage-overlay-e27a0d0b4b65d96262889fceed5aad13e678fc7e6642b0d16636335ca7736fcf-merged.mount: Deactivated successfully.
Dec  3 16:14:31 np0005544708 podman[117501]: 2025-12-03 21:14:31.24548111 +0000 UTC m=+0.895359553 container remove 72a7841b0f36710d63d6788970562f298be7c9e31a5064a63831afc4ff07151b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  3 16:14:31 np0005544708 systemd[1]: libpod-conmon-72a7841b0f36710d63d6788970562f298be7c9e31a5064a63831afc4ff07151b.scope: Deactivated successfully.
Dec  3 16:14:31 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v238: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:31 np0005544708 podman[117717]: 2025-12-03 21:14:31.442883838 +0000 UTC m=+0.063061125 container create ad1e0b2da5b95170826759fb2f58fd4ac344ae70b864378eac097c7bd35f4a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_greider, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 16:14:31 np0005544708 systemd[1]: Started libpod-conmon-ad1e0b2da5b95170826759fb2f58fd4ac344ae70b864378eac097c7bd35f4a7e.scope.
Dec  3 16:14:31 np0005544708 podman[117717]: 2025-12-03 21:14:31.420463849 +0000 UTC m=+0.040641166 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:14:31 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:14:31 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fed73fb47b181cd95fd48c0dfe09f2f6c906b3fee2203cdc98c434e8469c4e22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:14:31 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fed73fb47b181cd95fd48c0dfe09f2f6c906b3fee2203cdc98c434e8469c4e22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:14:31 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fed73fb47b181cd95fd48c0dfe09f2f6c906b3fee2203cdc98c434e8469c4e22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:14:31 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fed73fb47b181cd95fd48c0dfe09f2f6c906b3fee2203cdc98c434e8469c4e22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:14:31 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fed73fb47b181cd95fd48c0dfe09f2f6c906b3fee2203cdc98c434e8469c4e22/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:14:31 np0005544708 podman[117717]: 2025-12-03 21:14:31.53079714 +0000 UTC m=+0.150974427 container init ad1e0b2da5b95170826759fb2f58fd4ac344ae70b864378eac097c7bd35f4a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec  3 16:14:31 np0005544708 podman[117717]: 2025-12-03 21:14:31.539082955 +0000 UTC m=+0.159260222 container start ad1e0b2da5b95170826759fb2f58fd4ac344ae70b864378eac097c7bd35f4a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_greider, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 16:14:31 np0005544708 podman[117717]: 2025-12-03 21:14:31.54332433 +0000 UTC m=+0.163501627 container attach ad1e0b2da5b95170826759fb2f58fd4ac344ae70b864378eac097c7bd35f4a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_greider, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  3 16:14:31 np0005544708 python3[117789]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  3 16:14:31 np0005544708 clever_greider[117779]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:14:32 np0005544708 clever_greider[117779]: --> All data devices are unavailable
Dec  3 16:14:32 np0005544708 systemd[1]: libpod-ad1e0b2da5b95170826759fb2f58fd4ac344ae70b864378eac097c7bd35f4a7e.scope: Deactivated successfully.
Dec  3 16:14:32 np0005544708 podman[117717]: 2025-12-03 21:14:32.042239329 +0000 UTC m=+0.662416636 container died ad1e0b2da5b95170826759fb2f58fd4ac344ae70b864378eac097c7bd35f4a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:14:32 np0005544708 systemd[1]: var-lib-containers-storage-overlay-fed73fb47b181cd95fd48c0dfe09f2f6c906b3fee2203cdc98c434e8469c4e22-merged.mount: Deactivated successfully.
Dec  3 16:14:32 np0005544708 podman[117717]: 2025-12-03 21:14:32.09628817 +0000 UTC m=+0.716465467 container remove ad1e0b2da5b95170826759fb2f58fd4ac344ae70b864378eac097c7bd35f4a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_greider, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:14:32 np0005544708 systemd[1]: libpod-conmon-ad1e0b2da5b95170826759fb2f58fd4ac344ae70b864378eac097c7bd35f4a7e.scope: Deactivated successfully.
Dec  3 16:14:32 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:14:32 np0005544708 podman[118033]: 2025-12-03 21:14:32.606375112 +0000 UTC m=+0.069362557 container create 7f0ccf942c6e1f9c11204cdd3fb56e103f6a6fddbc9d7d1717b039a2817f69da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_solomon, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 16:14:32 np0005544708 python3.9[118020]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:32 np0005544708 systemd[1]: Started libpod-conmon-7f0ccf942c6e1f9c11204cdd3fb56e103f6a6fddbc9d7d1717b039a2817f69da.scope.
Dec  3 16:14:32 np0005544708 podman[118033]: 2025-12-03 21:14:32.576366286 +0000 UTC m=+0.039353781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:14:32 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:14:32 np0005544708 podman[118033]: 2025-12-03 21:14:32.717284408 +0000 UTC m=+0.180271813 container init 7f0ccf942c6e1f9c11204cdd3fb56e103f6a6fddbc9d7d1717b039a2817f69da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_solomon, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Dec  3 16:14:32 np0005544708 podman[118033]: 2025-12-03 21:14:32.729227204 +0000 UTC m=+0.192214649 container start 7f0ccf942c6e1f9c11204cdd3fb56e103f6a6fddbc9d7d1717b039a2817f69da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:14:32 np0005544708 silly_solomon[118052]: 167 167
Dec  3 16:14:32 np0005544708 podman[118033]: 2025-12-03 21:14:32.733440328 +0000 UTC m=+0.196427733 container attach 7f0ccf942c6e1f9c11204cdd3fb56e103f6a6fddbc9d7d1717b039a2817f69da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_solomon, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:14:32 np0005544708 systemd[1]: libpod-7f0ccf942c6e1f9c11204cdd3fb56e103f6a6fddbc9d7d1717b039a2817f69da.scope: Deactivated successfully.
Dec  3 16:14:32 np0005544708 podman[118033]: 2025-12-03 21:14:32.734548449 +0000 UTC m=+0.197535854 container died 7f0ccf942c6e1f9c11204cdd3fb56e103f6a6fddbc9d7d1717b039a2817f69da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:14:32 np0005544708 systemd[1]: var-lib-containers-storage-overlay-bf4f6b216ef3ba2c187af50845cc8de32618126f629069858004a31c4f6f6a57-merged.mount: Deactivated successfully.
Dec  3 16:14:32 np0005544708 podman[118033]: 2025-12-03 21:14:32.77028134 +0000 UTC m=+0.233268755 container remove 7f0ccf942c6e1f9c11204cdd3fb56e103f6a6fddbc9d7d1717b039a2817f69da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_solomon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:14:32 np0005544708 systemd[1]: libpod-conmon-7f0ccf942c6e1f9c11204cdd3fb56e103f6a6fddbc9d7d1717b039a2817f69da.scope: Deactivated successfully.
Dec  3 16:14:32 np0005544708 podman[118124]: 2025-12-03 21:14:32.928297698 +0000 UTC m=+0.052838738 container create e4f9da23c2f9f64c5c67e3ac3fb57c0ac7ec9a456cad56cee52a428b7fbe84b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 16:14:32 np0005544708 systemd[1]: Started libpod-conmon-e4f9da23c2f9f64c5c67e3ac3fb57c0ac7ec9a456cad56cee52a428b7fbe84b0.scope.
Dec  3 16:14:32 np0005544708 podman[118124]: 2025-12-03 21:14:32.903157934 +0000 UTC m=+0.027698874 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:14:33 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:14:33 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0598e5d79eb2b2105863011169406716bdb438d2f14acf19470f4375b59b9a4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:14:33 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0598e5d79eb2b2105863011169406716bdb438d2f14acf19470f4375b59b9a4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:14:33 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0598e5d79eb2b2105863011169406716bdb438d2f14acf19470f4375b59b9a4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:14:33 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0598e5d79eb2b2105863011169406716bdb438d2f14acf19470f4375b59b9a4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:14:33 np0005544708 podman[118124]: 2025-12-03 21:14:33.026817047 +0000 UTC m=+0.151358047 container init e4f9da23c2f9f64c5c67e3ac3fb57c0ac7ec9a456cad56cee52a428b7fbe84b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  3 16:14:33 np0005544708 podman[118124]: 2025-12-03 21:14:33.039499241 +0000 UTC m=+0.164040181 container start e4f9da23c2f9f64c5c67e3ac3fb57c0ac7ec9a456cad56cee52a428b7fbe84b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kirch, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:14:33 np0005544708 podman[118124]: 2025-12-03 21:14:33.049268668 +0000 UTC m=+0.173809578 container attach e4f9da23c2f9f64c5c67e3ac3fb57c0ac7ec9a456cad56cee52a428b7fbe84b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  3 16:14:33 np0005544708 python3.9[118163]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:33 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v239: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:33 np0005544708 musing_kirch[118166]: {
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:    "0": [
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:        {
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "devices": [
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "/dev/loop3"
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            ],
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "lv_name": "ceph_lv0",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "lv_size": "21470642176",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "name": "ceph_lv0",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "tags": {
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.cluster_name": "ceph",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.crush_device_class": "",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.encrypted": "0",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.objectstore": "bluestore",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.osd_id": "0",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.type": "block",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.vdo": "0",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.with_tpm": "0"
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            },
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "type": "block",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "vg_name": "ceph_vg0"
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:        }
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:    ],
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:    "1": [
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:        {
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "devices": [
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "/dev/loop4"
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            ],
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "lv_name": "ceph_lv1",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "lv_size": "21470642176",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "name": "ceph_lv1",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "tags": {
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.cluster_name": "ceph",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.crush_device_class": "",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.encrypted": "0",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.objectstore": "bluestore",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.osd_id": "1",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.type": "block",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.vdo": "0",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.with_tpm": "0"
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            },
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "type": "block",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "vg_name": "ceph_vg1"
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:        }
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:    ],
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:    "2": [
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:        {
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "devices": [
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "/dev/loop5"
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            ],
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "lv_name": "ceph_lv2",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "lv_size": "21470642176",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "name": "ceph_lv2",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "tags": {
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.cluster_name": "ceph",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.crush_device_class": "",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.encrypted": "0",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.objectstore": "bluestore",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.osd_id": "2",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.type": "block",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.vdo": "0",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:                "ceph.with_tpm": "0"
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            },
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "type": "block",
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:            "vg_name": "ceph_vg2"
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:        }
Dec  3 16:14:33 np0005544708 musing_kirch[118166]:    ]
Dec  3 16:14:33 np0005544708 musing_kirch[118166]: }
Dec  3 16:14:33 np0005544708 systemd[1]: libpod-e4f9da23c2f9f64c5c67e3ac3fb57c0ac7ec9a456cad56cee52a428b7fbe84b0.scope: Deactivated successfully.
Dec  3 16:14:33 np0005544708 podman[118124]: 2025-12-03 21:14:33.356910365 +0000 UTC m=+0.481451285 container died e4f9da23c2f9f64c5c67e3ac3fb57c0ac7ec9a456cad56cee52a428b7fbe84b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kirch, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:14:33 np0005544708 systemd[1]: var-lib-containers-storage-overlay-0598e5d79eb2b2105863011169406716bdb438d2f14acf19470f4375b59b9a4a-merged.mount: Deactivated successfully.
Dec  3 16:14:33 np0005544708 podman[118124]: 2025-12-03 21:14:33.396002208 +0000 UTC m=+0.520543118 container remove e4f9da23c2f9f64c5c67e3ac3fb57c0ac7ec9a456cad56cee52a428b7fbe84b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:14:33 np0005544708 systemd[1]: libpod-conmon-e4f9da23c2f9f64c5c67e3ac3fb57c0ac7ec9a456cad56cee52a428b7fbe84b0.scope: Deactivated successfully.
Dec  3 16:14:33 np0005544708 python3.9[118386]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:33 np0005544708 podman[118404]: 2025-12-03 21:14:33.866639198 +0000 UTC m=+0.046961588 container create 640ec46ac95a9c31a7e89efbacb0c19ca9337302398eaceee4079f5ce8297ff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_carver, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:14:33 np0005544708 systemd[1]: Started libpod-conmon-640ec46ac95a9c31a7e89efbacb0c19ca9337302398eaceee4079f5ce8297ff9.scope.
Dec  3 16:14:33 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:14:33 np0005544708 podman[118404]: 2025-12-03 21:14:33.840400584 +0000 UTC m=+0.020723034 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:14:33 np0005544708 podman[118404]: 2025-12-03 21:14:33.943878788 +0000 UTC m=+0.124201228 container init 640ec46ac95a9c31a7e89efbacb0c19ca9337302398eaceee4079f5ce8297ff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_carver, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:14:33 np0005544708 podman[118404]: 2025-12-03 21:14:33.950619012 +0000 UTC m=+0.130941402 container start 640ec46ac95a9c31a7e89efbacb0c19ca9337302398eaceee4079f5ce8297ff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:14:33 np0005544708 frosty_carver[118443]: 167 167
Dec  3 16:14:33 np0005544708 systemd[1]: libpod-640ec46ac95a9c31a7e89efbacb0c19ca9337302398eaceee4079f5ce8297ff9.scope: Deactivated successfully.
Dec  3 16:14:33 np0005544708 podman[118404]: 2025-12-03 21:14:33.988695267 +0000 UTC m=+0.169017697 container attach 640ec46ac95a9c31a7e89efbacb0c19ca9337302398eaceee4079f5ce8297ff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_carver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 16:14:33 np0005544708 podman[118404]: 2025-12-03 21:14:33.989218291 +0000 UTC m=+0.169540711 container died 640ec46ac95a9c31a7e89efbacb0c19ca9337302398eaceee4079f5ce8297ff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_carver, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:14:34 np0005544708 python3.9[118511]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:34 np0005544708 systemd[1]: var-lib-containers-storage-overlay-4321be52e9e7caafd66a159d49fff1e1d552ca82ee1600f392dee7c05ae3f7d0-merged.mount: Deactivated successfully.
Dec  3 16:14:34 np0005544708 podman[118404]: 2025-12-03 21:14:34.335863289 +0000 UTC m=+0.516185669 container remove 640ec46ac95a9c31a7e89efbacb0c19ca9337302398eaceee4079f5ce8297ff9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  3 16:14:34 np0005544708 systemd[1]: libpod-conmon-640ec46ac95a9c31a7e89efbacb0c19ca9337302398eaceee4079f5ce8297ff9.scope: Deactivated successfully.
Dec  3 16:14:34 np0005544708 podman[118569]: 2025-12-03 21:14:34.501990567 +0000 UTC m=+0.046290720 container create 933979592f23e32b26f05f64a5a270b81385af41d3118f7559e7dd6e83c10cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hoover, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 16:14:34 np0005544708 systemd[1]: Started libpod-conmon-933979592f23e32b26f05f64a5a270b81385af41d3118f7559e7dd6e83c10cdb.scope.
Dec  3 16:14:34 np0005544708 podman[118569]: 2025-12-03 21:14:34.480146044 +0000 UTC m=+0.024446287 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:14:34 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:14:34 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28c14796f19487aa2876bae9f6b99411d0208e4b0c6dec01f7f3b30813e787e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:14:34 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28c14796f19487aa2876bae9f6b99411d0208e4b0c6dec01f7f3b30813e787e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:14:34 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28c14796f19487aa2876bae9f6b99411d0208e4b0c6dec01f7f3b30813e787e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:14:34 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28c14796f19487aa2876bae9f6b99411d0208e4b0c6dec01f7f3b30813e787e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:14:34 np0005544708 podman[118569]: 2025-12-03 21:14:34.602209333 +0000 UTC m=+0.146509596 container init 933979592f23e32b26f05f64a5a270b81385af41d3118f7559e7dd6e83c10cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hoover, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  3 16:14:34 np0005544708 podman[118569]: 2025-12-03 21:14:34.610437967 +0000 UTC m=+0.154738130 container start 933979592f23e32b26f05f64a5a270b81385af41d3118f7559e7dd6e83c10cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hoover, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 16:14:34 np0005544708 podman[118569]: 2025-12-03 21:14:34.614297832 +0000 UTC m=+0.158598035 container attach 933979592f23e32b26f05f64a5a270b81385af41d3118f7559e7dd6e83c10cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec  3 16:14:35 np0005544708 python3.9[118699]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:35 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v240: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:35 np0005544708 lvm[118844]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:14:35 np0005544708 lvm[118844]: VG ceph_vg0 finished
Dec  3 16:14:35 np0005544708 lvm[118847]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:14:35 np0005544708 lvm[118847]: VG ceph_vg1 finished
Dec  3 16:14:35 np0005544708 lvm[118849]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:14:35 np0005544708 lvm[118849]: VG ceph_vg2 finished
Dec  3 16:14:35 np0005544708 beautiful_hoover[118614]: {}
Dec  3 16:14:35 np0005544708 systemd[1]: libpod-933979592f23e32b26f05f64a5a270b81385af41d3118f7559e7dd6e83c10cdb.scope: Deactivated successfully.
Dec  3 16:14:35 np0005544708 systemd[1]: libpod-933979592f23e32b26f05f64a5a270b81385af41d3118f7559e7dd6e83c10cdb.scope: Consumed 1.303s CPU time.
Dec  3 16:14:35 np0005544708 podman[118569]: 2025-12-03 21:14:35.440121931 +0000 UTC m=+0.984422124 container died 933979592f23e32b26f05f64a5a270b81385af41d3118f7559e7dd6e83c10cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec  3 16:14:35 np0005544708 systemd[1]: var-lib-containers-storage-overlay-e28c14796f19487aa2876bae9f6b99411d0208e4b0c6dec01f7f3b30813e787e-merged.mount: Deactivated successfully.
Dec  3 16:14:35 np0005544708 python3.9[118842]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:35 np0005544708 podman[118569]: 2025-12-03 21:14:35.484884289 +0000 UTC m=+1.029184462 container remove 933979592f23e32b26f05f64a5a270b81385af41d3118f7559e7dd6e83c10cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hoover, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:14:35 np0005544708 systemd[1]: libpod-conmon-933979592f23e32b26f05f64a5a270b81385af41d3118f7559e7dd6e83c10cdb.scope: Deactivated successfully.
Dec  3 16:14:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:14:35 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:14:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:14:35 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:14:36 np0005544708 python3.9[119040]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:36 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:14:36 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:14:36 np0005544708 python3.9[119118]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:37 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v241: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:37 np0005544708 python3.9[119270]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:14:38 np0005544708 python3.9[119348]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:38 np0005544708 python3.9[119500]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:14:39 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v242: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:39 np0005544708 python3.9[119655]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:40 np0005544708 python3.9[119807]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:41 np0005544708 python3.9[119959]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:41 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v243: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:42 np0005544708 python3.9[120111]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  3 16:14:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:14:42 np0005544708 python3.9[120263]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  3 16:14:43 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v244: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:43 np0005544708 systemd[1]: session-39.scope: Deactivated successfully.
Dec  3 16:14:43 np0005544708 systemd[1]: session-39.scope: Consumed 32.519s CPU time.
Dec  3 16:14:43 np0005544708 systemd-logind[787]: Session 39 logged out. Waiting for processes to exit.
Dec  3 16:14:43 np0005544708 systemd-logind[787]: Removed session 39.
Dec  3 16:14:45 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v245: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:47 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v246: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:14:49 np0005544708 systemd-logind[787]: New session 40 of user zuul.
Dec  3 16:14:49 np0005544708 systemd[1]: Started Session 40 of User zuul.
Dec  3 16:14:49 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v247: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:49 np0005544708 python3.9[120444]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  3 16:14:50 np0005544708 python3.9[120596]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:14:51 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v248: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:51 np0005544708 python3.9[120750]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec  3 16:14:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:14:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:14:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:14:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:14:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:14:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:14:52 np0005544708 python3.9[120902]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.1obbp56s follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:14:52 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:14:53 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v249: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:53 np0005544708 python3.9[121027]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.1obbp56s mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764796491.89249-44-159480534895363/.source.1obbp56s _original_basename=.kguit_0n follow=False checksum=61f24f0023f825930bc81e128bb40f917e4e4dde backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:54 np0005544708 python3.9[121179]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:14:55 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v250: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:55 np0005544708 python3.9[121331]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/QcnFnE07R2H02WXa+53W3W+nwsFsC4YoQpDZUgxEwlg4f2zQf8fQIG23b5h9N8ej11I+FwfST4eb14wdXsFBAm6rVbCzkwQOmaDc1DdRfSmSFzwYKgqnejjeunc7W9ASRY8ZFAX/dexoruuzsoDFSnT/YK2DiUDLCoWmwO4mZ946GvsVF6yCywprEQo/oFdVyYbYBvGnl2hb9O06ePH8wQRx2BT7GKvzyv0j8Dz3LjXOzrd+jB7UlvodWIaHPlQhq/S/ZDfA640mfL7TSk/VRKvnWyi4m3+Gbj0A92cO36Objq1V2W1DPen5Nzv5CbZRHNjBvVR9G0jGLdsP8sWtUhe2qfiLZlAx0Cn0ZIhzPbS2Ij3lgp1Otug1NK15JYpiz9z0JO+UgfdZ9ht6yAYnsMcQ4OaFvKqWmsOxrx76BJ8s3hQuBMrZL+YgtbDswJVFn9/ay22MQ+ntCLeQL6GPb6WQJGnnWYqSlUX3e8wBllkbHrFK1/iyfqWjrHwteK8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINtkoZCmFpb3z8TzbldoOvjALaFBxUWmFrtA4oHE040r#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAzVjP1T+0nWOYuc0KdOyqtmhcGoQseIckbkxVi0stL4dfIoBsNFyujIS49nno21BKZJb6EV/fwil4CuPgbMlGg=#012 create=True mode=0644 path=/tmp/ansible.1obbp56s state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:55 np0005544708 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  3 16:14:56 np0005544708 python3.9[121485]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.1obbp56s' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:14:57 np0005544708 python3.9[121639]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.1obbp56s state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:14:57 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v251: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:14:57 np0005544708 systemd[1]: session-40.scope: Deactivated successfully.
Dec  3 16:14:57 np0005544708 systemd[1]: session-40.scope: Consumed 5.610s CPU time.
Dec  3 16:14:57 np0005544708 systemd-logind[787]: Session 40 logged out. Waiting for processes to exit.
Dec  3 16:14:57 np0005544708 systemd-logind[787]: Removed session 40.
Dec  3 16:14:57 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:14:59 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v252: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:01 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v253: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:15:03 np0005544708 systemd-logind[787]: New session 41 of user zuul.
Dec  3 16:15:03 np0005544708 systemd[1]: Started Session 41 of User zuul.
Dec  3 16:15:03 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v254: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:04 np0005544708 python3.9[121817]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:15:05 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v255: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:05 np0005544708 python3.9[121973]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  3 16:15:06 np0005544708 python3.9[122127]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 16:15:07 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v256: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:07 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:15:07 np0005544708 python3.9[122280]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:15:08 np0005544708 python3.9[122433]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:15:09 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v257: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:09 np0005544708 python3.9[122585]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:15:09 np0005544708 systemd[1]: session-41.scope: Deactivated successfully.
Dec  3 16:15:09 np0005544708 systemd[1]: session-41.scope: Consumed 4.352s CPU time.
Dec  3 16:15:09 np0005544708 systemd-logind[787]: Session 41 logged out. Waiting for processes to exit.
Dec  3 16:15:09 np0005544708 systemd-logind[787]: Removed session 41.
Dec  3 16:15:10 np0005544708 systemd[1]: session-17.scope: Deactivated successfully.
Dec  3 16:15:10 np0005544708 systemd[1]: session-17.scope: Consumed 1min 56.543s CPU time.
Dec  3 16:15:10 np0005544708 systemd-logind[787]: Session 17 logged out. Waiting for processes to exit.
Dec  3 16:15:10 np0005544708 systemd-logind[787]: Removed session 17.
Dec  3 16:15:11 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v258: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:12 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:15:13 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v259: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:15 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v260: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:15 np0005544708 systemd-logind[787]: New session 42 of user zuul.
Dec  3 16:15:15 np0005544708 systemd[1]: Started Session 42 of User zuul.
Dec  3 16:15:16 np0005544708 python3.9[122764]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:15:17 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v261: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:15:17 np0005544708 python3.9[122920]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 16:15:18 np0005544708 python3.9[123004]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  3 16:15:19 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v262: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:20 np0005544708 python3.9[123155]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:15:21
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'images', 'volumes', '.mgr', 'vms', 'cephfs.cephfs.data']
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v263: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:15:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:15:22 np0005544708 python3.9[123306]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 16:15:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:15:23 np0005544708 python3.9[123456]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:15:23 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v264: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:23 np0005544708 python3.9[123606]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:15:24 np0005544708 systemd[1]: session-42.scope: Deactivated successfully.
Dec  3 16:15:24 np0005544708 systemd[1]: session-42.scope: Consumed 6.358s CPU time.
Dec  3 16:15:24 np0005544708 systemd-logind[787]: Session 42 logged out. Waiting for processes to exit.
Dec  3 16:15:24 np0005544708 systemd-logind[787]: Removed session 42.
Dec  3 16:15:25 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v265: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:27 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v266: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:15:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:15:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:15:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:15:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:15:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:15:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:15:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:15:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:15:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:15:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:15:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:15:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:15:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:15:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:15:27 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:15:29 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v267: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:29 np0005544708 systemd-logind[787]: New session 43 of user zuul.
Dec  3 16:15:29 np0005544708 systemd[1]: Started Session 43 of User zuul.
Dec  3 16:15:30 np0005544708 python3.9[123784]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:15:31 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v268: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:32 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:15:32 np0005544708 python3.9[123940]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:15:33 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v269: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:33 np0005544708 python3.9[124092]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:15:34 np0005544708 python3.9[124244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:15:35 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v270: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:35 np0005544708 python3.9[124367]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796533.8330057-65-231674319160608/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=2c2471fe99a406a3fe04a82d1be92e23b4efda72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:15:36 np0005544708 python3.9[124569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:15:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:15:36 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:15:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:15:36 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:15:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:15:36 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:15:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:15:36 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:15:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:15:36 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:15:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:15:36 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:15:36 np0005544708 python3.9[124772]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796535.4772625-65-86842222149640/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d3c60da42303b433c739f24f71a523db16d56769 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:15:36 np0005544708 podman[124812]: 2025-12-03 21:15:36.942275152 +0000 UTC m=+0.055458590 container create 91e7d6bb16744d80246dbd2f0aaa8c1c97fb66c14386e3ba3915c86cdc0bab39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Dec  3 16:15:36 np0005544708 systemd[1]: Started libpod-conmon-91e7d6bb16744d80246dbd2f0aaa8c1c97fb66c14386e3ba3915c86cdc0bab39.scope.
Dec  3 16:15:37 np0005544708 podman[124812]: 2025-12-03 21:15:36.91203267 +0000 UTC m=+0.025216178 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:15:37 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:15:37 np0005544708 podman[124812]: 2025-12-03 21:15:37.030285694 +0000 UTC m=+0.143469212 container init 91e7d6bb16744d80246dbd2f0aaa8c1c97fb66c14386e3ba3915c86cdc0bab39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec  3 16:15:37 np0005544708 podman[124812]: 2025-12-03 21:15:37.043623472 +0000 UTC m=+0.156806960 container start 91e7d6bb16744d80246dbd2f0aaa8c1c97fb66c14386e3ba3915c86cdc0bab39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  3 16:15:37 np0005544708 podman[124812]: 2025-12-03 21:15:37.04764518 +0000 UTC m=+0.160828738 container attach 91e7d6bb16744d80246dbd2f0aaa8c1c97fb66c14386e3ba3915c86cdc0bab39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mahavira, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:15:37 np0005544708 brave_mahavira[124852]: 167 167
Dec  3 16:15:37 np0005544708 systemd[1]: libpod-91e7d6bb16744d80246dbd2f0aaa8c1c97fb66c14386e3ba3915c86cdc0bab39.scope: Deactivated successfully.
Dec  3 16:15:37 np0005544708 conmon[124852]: conmon 91e7d6bb16744d80246d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91e7d6bb16744d80246dbd2f0aaa8c1c97fb66c14386e3ba3915c86cdc0bab39.scope/container/memory.events
Dec  3 16:15:37 np0005544708 podman[124812]: 2025-12-03 21:15:37.052531171 +0000 UTC m=+0.165714649 container died 91e7d6bb16744d80246dbd2f0aaa8c1c97fb66c14386e3ba3915c86cdc0bab39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mahavira, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  3 16:15:37 np0005544708 systemd[1]: var-lib-containers-storage-overlay-95563d99443bc586449cfe41b03db0356367d4cd536e3ca9e6d3cadb683f04b7-merged.mount: Deactivated successfully.
Dec  3 16:15:37 np0005544708 podman[124812]: 2025-12-03 21:15:37.104747193 +0000 UTC m=+0.217930641 container remove 91e7d6bb16744d80246dbd2f0aaa8c1c97fb66c14386e3ba3915c86cdc0bab39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:15:37 np0005544708 systemd[1]: libpod-conmon-91e7d6bb16744d80246dbd2f0aaa8c1c97fb66c14386e3ba3915c86cdc0bab39.scope: Deactivated successfully.
Dec  3 16:15:37 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v271: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:37 np0005544708 podman[124953]: 2025-12-03 21:15:37.336128444 +0000 UTC m=+0.077400908 container create 3348b6cd8aab5da81a23dbfbeddaeafe45d245be0466ce20c35010d99b6f5e25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_swartz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec  3 16:15:37 np0005544708 systemd[1]: Started libpod-conmon-3348b6cd8aab5da81a23dbfbeddaeafe45d245be0466ce20c35010d99b6f5e25.scope.
Dec  3 16:15:37 np0005544708 podman[124953]: 2025-12-03 21:15:37.304897676 +0000 UTC m=+0.046170210 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:15:37 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:15:37 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997a270cff7cf78e8b5acec0a7321068277c39370ee088e001ea2c0e6da60240/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:15:37 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997a270cff7cf78e8b5acec0a7321068277c39370ee088e001ea2c0e6da60240/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:15:37 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997a270cff7cf78e8b5acec0a7321068277c39370ee088e001ea2c0e6da60240/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:15:37 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997a270cff7cf78e8b5acec0a7321068277c39370ee088e001ea2c0e6da60240/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:15:37 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997a270cff7cf78e8b5acec0a7321068277c39370ee088e001ea2c0e6da60240/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:15:37 np0005544708 podman[124953]: 2025-12-03 21:15:37.451123291 +0000 UTC m=+0.192395845 container init 3348b6cd8aab5da81a23dbfbeddaeafe45d245be0466ce20c35010d99b6f5e25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec  3 16:15:37 np0005544708 podman[124953]: 2025-12-03 21:15:37.462905997 +0000 UTC m=+0.204178491 container start 3348b6cd8aab5da81a23dbfbeddaeafe45d245be0466ce20c35010d99b6f5e25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:15:37 np0005544708 podman[124953]: 2025-12-03 21:15:37.467108461 +0000 UTC m=+0.208381015 container attach 3348b6cd8aab5da81a23dbfbeddaeafe45d245be0466ce20c35010d99b6f5e25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_swartz, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:15:37 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:15:37 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:15:37 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:15:37 np0005544708 python3.9[124995]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:15:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:15:38 np0005544708 vigorous_swartz[124998]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:15:38 np0005544708 vigorous_swartz[124998]: --> All data devices are unavailable
Dec  3 16:15:38 np0005544708 systemd[1]: libpod-3348b6cd8aab5da81a23dbfbeddaeafe45d245be0466ce20c35010d99b6f5e25.scope: Deactivated successfully.
Dec  3 16:15:38 np0005544708 podman[124953]: 2025-12-03 21:15:38.048085987 +0000 UTC m=+0.789358491 container died 3348b6cd8aab5da81a23dbfbeddaeafe45d245be0466ce20c35010d99b6f5e25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_swartz, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec  3 16:15:38 np0005544708 systemd[1]: var-lib-containers-storage-overlay-997a270cff7cf78e8b5acec0a7321068277c39370ee088e001ea2c0e6da60240-merged.mount: Deactivated successfully.
Dec  3 16:15:38 np0005544708 podman[124953]: 2025-12-03 21:15:38.102697173 +0000 UTC m=+0.843969627 container remove 3348b6cd8aab5da81a23dbfbeddaeafe45d245be0466ce20c35010d99b6f5e25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:15:38 np0005544708 systemd[1]: libpod-conmon-3348b6cd8aab5da81a23dbfbeddaeafe45d245be0466ce20c35010d99b6f5e25.scope: Deactivated successfully.
Dec  3 16:15:38 np0005544708 python3.9[125137]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796536.9543972-65-205376886054820/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=149563241073772b4070777f145b125561375852 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:15:38 np0005544708 podman[125241]: 2025-12-03 21:15:38.538105872 +0000 UTC m=+0.042558763 container create 6136abf1c22dc9769745260ff6a2a8d41c2f55e6df85026c9e60a1023fc7503b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:15:38 np0005544708 systemd[1]: Started libpod-conmon-6136abf1c22dc9769745260ff6a2a8d41c2f55e6df85026c9e60a1023fc7503b.scope.
Dec  3 16:15:38 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:15:38 np0005544708 podman[125241]: 2025-12-03 21:15:38.520584872 +0000 UTC m=+0.025037783 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:15:38 np0005544708 podman[125241]: 2025-12-03 21:15:38.622926889 +0000 UTC m=+0.127379800 container init 6136abf1c22dc9769745260ff6a2a8d41c2f55e6df85026c9e60a1023fc7503b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:15:38 np0005544708 podman[125241]: 2025-12-03 21:15:38.631719896 +0000 UTC m=+0.136172787 container start 6136abf1c22dc9769745260ff6a2a8d41c2f55e6df85026c9e60a1023fc7503b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:15:38 np0005544708 podman[125241]: 2025-12-03 21:15:38.63561985 +0000 UTC m=+0.140072761 container attach 6136abf1c22dc9769745260ff6a2a8d41c2f55e6df85026c9e60a1023fc7503b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:15:38 np0005544708 jovial_bassi[125300]: 167 167
Dec  3 16:15:38 np0005544708 systemd[1]: libpod-6136abf1c22dc9769745260ff6a2a8d41c2f55e6df85026c9e60a1023fc7503b.scope: Deactivated successfully.
Dec  3 16:15:38 np0005544708 podman[125241]: 2025-12-03 21:15:38.639995608 +0000 UTC m=+0.144448539 container died 6136abf1c22dc9769745260ff6a2a8d41c2f55e6df85026c9e60a1023fc7503b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:15:38 np0005544708 systemd[1]: var-lib-containers-storage-overlay-a2590fce862e7648da334fc723722f2c11be61da9d529b42d05bb95821c0f85f-merged.mount: Deactivated successfully.
Dec  3 16:15:38 np0005544708 podman[125241]: 2025-12-03 21:15:38.681886532 +0000 UTC m=+0.186339423 container remove 6136abf1c22dc9769745260ff6a2a8d41c2f55e6df85026c9e60a1023fc7503b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec  3 16:15:38 np0005544708 systemd[1]: libpod-conmon-6136abf1c22dc9769745260ff6a2a8d41c2f55e6df85026c9e60a1023fc7503b.scope: Deactivated successfully.
Dec  3 16:15:38 np0005544708 podman[125400]: 2025-12-03 21:15:38.879047365 +0000 UTC m=+0.049094169 container create 04aa35afadbc36990f9df35a2e210933e48acaaf709b606f95a4c8a6f1b25f1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mayer, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec  3 16:15:38 np0005544708 systemd[1]: Started libpod-conmon-04aa35afadbc36990f9df35a2e210933e48acaaf709b606f95a4c8a6f1b25f1c.scope.
Dec  3 16:15:38 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:15:38 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37ff4e5fcfac082b753a855aeb449ebe375960c9769a9f0d135a55f0115b6a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:15:38 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37ff4e5fcfac082b753a855aeb449ebe375960c9769a9f0d135a55f0115b6a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:15:38 np0005544708 podman[125400]: 2025-12-03 21:15:38.859788868 +0000 UTC m=+0.029835702 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:15:38 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37ff4e5fcfac082b753a855aeb449ebe375960c9769a9f0d135a55f0115b6a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:15:38 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37ff4e5fcfac082b753a855aeb449ebe375960c9769a9f0d135a55f0115b6a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:15:38 np0005544708 podman[125400]: 2025-12-03 21:15:38.965878566 +0000 UTC m=+0.135925370 container init 04aa35afadbc36990f9df35a2e210933e48acaaf709b606f95a4c8a6f1b25f1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mayer, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  3 16:15:38 np0005544708 podman[125400]: 2025-12-03 21:15:38.972108853 +0000 UTC m=+0.142155677 container start 04aa35afadbc36990f9df35a2e210933e48acaaf709b606f95a4c8a6f1b25f1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mayer, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 16:15:38 np0005544708 podman[125400]: 2025-12-03 21:15:38.975783971 +0000 UTC m=+0.145830775 container attach 04aa35afadbc36990f9df35a2e210933e48acaaf709b606f95a4c8a6f1b25f1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mayer, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:15:39 np0005544708 python3.9[125418]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:15:39 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v272: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:39 np0005544708 cool_mayer[125425]: {
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:    "0": [
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:        {
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "devices": [
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "/dev/loop3"
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            ],
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "lv_name": "ceph_lv0",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "lv_size": "21470642176",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "name": "ceph_lv0",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "tags": {
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.cluster_name": "ceph",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.crush_device_class": "",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.encrypted": "0",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.objectstore": "bluestore",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.osd_id": "0",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.type": "block",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.vdo": "0",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.with_tpm": "0"
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            },
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "type": "block",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "vg_name": "ceph_vg0"
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:        }
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:    ],
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:    "1": [
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:        {
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "devices": [
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "/dev/loop4"
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            ],
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "lv_name": "ceph_lv1",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "lv_size": "21470642176",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "name": "ceph_lv1",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "tags": {
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.cluster_name": "ceph",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.crush_device_class": "",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.encrypted": "0",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.objectstore": "bluestore",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.osd_id": "1",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.type": "block",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.vdo": "0",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.with_tpm": "0"
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            },
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "type": "block",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "vg_name": "ceph_vg1"
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:        }
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:    ],
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:    "2": [
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:        {
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "devices": [
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "/dev/loop5"
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            ],
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "lv_name": "ceph_lv2",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "lv_size": "21470642176",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "name": "ceph_lv2",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "tags": {
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.cluster_name": "ceph",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.crush_device_class": "",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.encrypted": "0",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.objectstore": "bluestore",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.osd_id": "2",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.type": "block",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.vdo": "0",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:                "ceph.with_tpm": "0"
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            },
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "type": "block",
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:            "vg_name": "ceph_vg2"
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:        }
Dec  3 16:15:39 np0005544708 cool_mayer[125425]:    ]
Dec  3 16:15:39 np0005544708 cool_mayer[125425]: }
Dec  3 16:15:39 np0005544708 systemd[1]: libpod-04aa35afadbc36990f9df35a2e210933e48acaaf709b606f95a4c8a6f1b25f1c.scope: Deactivated successfully.
Dec  3 16:15:39 np0005544708 podman[125400]: 2025-12-03 21:15:39.369210104 +0000 UTC m=+0.539256918 container died 04aa35afadbc36990f9df35a2e210933e48acaaf709b606f95a4c8a6f1b25f1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mayer, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:15:39 np0005544708 systemd[1]: var-lib-containers-storage-overlay-f37ff4e5fcfac082b753a855aeb449ebe375960c9769a9f0d135a55f0115b6a1-merged.mount: Deactivated successfully.
Dec  3 16:15:39 np0005544708 podman[125400]: 2025-12-03 21:15:39.42383247 +0000 UTC m=+0.593879284 container remove 04aa35afadbc36990f9df35a2e210933e48acaaf709b606f95a4c8a6f1b25f1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mayer, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  3 16:15:39 np0005544708 systemd[1]: libpod-conmon-04aa35afadbc36990f9df35a2e210933e48acaaf709b606f95a4c8a6f1b25f1c.scope: Deactivated successfully.
Dec  3 16:15:39 np0005544708 python3.9[125647]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:15:39 np0005544708 podman[125660]: 2025-12-03 21:15:39.899684714 +0000 UTC m=+0.057607107 container create d8c021a5ad2b3e4983117ec65f60ab8e08aa02dbbde5eea37e8afc2b5bc0bc8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lamport, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:15:39 np0005544708 systemd[1]: Started libpod-conmon-d8c021a5ad2b3e4983117ec65f60ab8e08aa02dbbde5eea37e8afc2b5bc0bc8e.scope.
Dec  3 16:15:39 np0005544708 podman[125660]: 2025-12-03 21:15:39.871831286 +0000 UTC m=+0.029753759 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:15:39 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:15:39 np0005544708 podman[125660]: 2025-12-03 21:15:39.986929406 +0000 UTC m=+0.144851839 container init d8c021a5ad2b3e4983117ec65f60ab8e08aa02dbbde5eea37e8afc2b5bc0bc8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lamport, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:15:39 np0005544708 podman[125660]: 2025-12-03 21:15:39.999287799 +0000 UTC m=+0.157210192 container start d8c021a5ad2b3e4983117ec65f60ab8e08aa02dbbde5eea37e8afc2b5bc0bc8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lamport, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:15:40 np0005544708 kind_lamport[125700]: 167 167
Dec  3 16:15:40 np0005544708 systemd[1]: libpod-d8c021a5ad2b3e4983117ec65f60ab8e08aa02dbbde5eea37e8afc2b5bc0bc8e.scope: Deactivated successfully.
Dec  3 16:15:40 np0005544708 conmon[125700]: conmon d8c021a5ad2b3e498311 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d8c021a5ad2b3e4983117ec65f60ab8e08aa02dbbde5eea37e8afc2b5bc0bc8e.scope/container/memory.events
Dec  3 16:15:40 np0005544708 podman[125660]: 2025-12-03 21:15:40.010851219 +0000 UTC m=+0.168773622 container attach d8c021a5ad2b3e4983117ec65f60ab8e08aa02dbbde5eea37e8afc2b5bc0bc8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:15:40 np0005544708 podman[125660]: 2025-12-03 21:15:40.011380033 +0000 UTC m=+0.169302436 container died d8c021a5ad2b3e4983117ec65f60ab8e08aa02dbbde5eea37e8afc2b5bc0bc8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:15:40 np0005544708 systemd[1]: var-lib-containers-storage-overlay-63be4198155103ba1dc972305f67f215b2e5f88fd367bb6fdefe9dae06959d3c-merged.mount: Deactivated successfully.
Dec  3 16:15:40 np0005544708 podman[125660]: 2025-12-03 21:15:40.087777824 +0000 UTC m=+0.245700207 container remove d8c021a5ad2b3e4983117ec65f60ab8e08aa02dbbde5eea37e8afc2b5bc0bc8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lamport, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:15:40 np0005544708 systemd[1]: libpod-conmon-d8c021a5ad2b3e4983117ec65f60ab8e08aa02dbbde5eea37e8afc2b5bc0bc8e.scope: Deactivated successfully.
Dec  3 16:15:40 np0005544708 podman[125799]: 2025-12-03 21:15:40.303116885 +0000 UTC m=+0.046294024 container create c425d3efbe04513964865b44ba1df0cf5bb09cda433718d18280a3ad10ba8109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:15:40 np0005544708 systemd[1]: Started libpod-conmon-c425d3efbe04513964865b44ba1df0cf5bb09cda433718d18280a3ad10ba8109.scope.
Dec  3 16:15:40 np0005544708 podman[125799]: 2025-12-03 21:15:40.284378062 +0000 UTC m=+0.027555211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:15:40 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:15:40 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eeee5ec26d524cb3dfe64e0dd2c0a3abdaddab49540a404bef194622e228737/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:15:40 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eeee5ec26d524cb3dfe64e0dd2c0a3abdaddab49540a404bef194622e228737/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:15:40 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eeee5ec26d524cb3dfe64e0dd2c0a3abdaddab49540a404bef194622e228737/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:15:40 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eeee5ec26d524cb3dfe64e0dd2c0a3abdaddab49540a404bef194622e228737/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:15:40 np0005544708 podman[125799]: 2025-12-03 21:15:40.472275366 +0000 UTC m=+0.215452545 container init c425d3efbe04513964865b44ba1df0cf5bb09cda433718d18280a3ad10ba8109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_gould, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  3 16:15:40 np0005544708 podman[125799]: 2025-12-03 21:15:40.482182992 +0000 UTC m=+0.225360141 container start c425d3efbe04513964865b44ba1df0cf5bb09cda433718d18280a3ad10ba8109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_gould, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:15:40 np0005544708 podman[125799]: 2025-12-03 21:15:40.485990805 +0000 UTC m=+0.229168014 container attach c425d3efbe04513964865b44ba1df0cf5bb09cda433718d18280a3ad10ba8109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_gould, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec  3 16:15:40 np0005544708 python3.9[125870]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:15:41 np0005544708 python3.9[126043]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796540.036237-124-8675484940645/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=bec64515721404c046714e3f0c8a7dec942f55c1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:15:41 np0005544708 lvm[126070]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:15:41 np0005544708 lvm[126070]: VG ceph_vg1 finished
Dec  3 16:15:41 np0005544708 lvm[126069]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:15:41 np0005544708 lvm[126069]: VG ceph_vg0 finished
Dec  3 16:15:41 np0005544708 lvm[126073]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:15:41 np0005544708 lvm[126073]: VG ceph_vg2 finished
Dec  3 16:15:41 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v273: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:41 np0005544708 awesome_gould[125863]: {}
Dec  3 16:15:41 np0005544708 systemd[1]: libpod-c425d3efbe04513964865b44ba1df0cf5bb09cda433718d18280a3ad10ba8109.scope: Deactivated successfully.
Dec  3 16:15:41 np0005544708 podman[125799]: 2025-12-03 21:15:41.331018379 +0000 UTC m=+1.074195558 container died c425d3efbe04513964865b44ba1df0cf5bb09cda433718d18280a3ad10ba8109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:15:41 np0005544708 systemd[1]: libpod-c425d3efbe04513964865b44ba1df0cf5bb09cda433718d18280a3ad10ba8109.scope: Consumed 1.354s CPU time.
Dec  3 16:15:41 np0005544708 systemd[1]: var-lib-containers-storage-overlay-4eeee5ec26d524cb3dfe64e0dd2c0a3abdaddab49540a404bef194622e228737-merged.mount: Deactivated successfully.
Dec  3 16:15:41 np0005544708 podman[125799]: 2025-12-03 21:15:41.387251559 +0000 UTC m=+1.130428708 container remove c425d3efbe04513964865b44ba1df0cf5bb09cda433718d18280a3ad10ba8109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 16:15:41 np0005544708 systemd[1]: libpod-conmon-c425d3efbe04513964865b44ba1df0cf5bb09cda433718d18280a3ad10ba8109.scope: Deactivated successfully.
Dec  3 16:15:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:15:41 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:15:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:15:41 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:15:41 np0005544708 python3.9[126264]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:15:42 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:15:42 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:15:42 np0005544708 python3.9[126387]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796541.3217268-124-98192508430848/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=8344db67344de20b5740c0f08184ccf2d3f2112a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:15:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:15:43 np0005544708 python3.9[126539]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:15:43 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v274: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:44 np0005544708 python3.9[126662]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796542.7574608-124-170406248105144/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ee36923c9157e342b04e55ed72f2c679eadd113c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:15:44 np0005544708 python3.9[126814]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:15:45 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v275: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:45 np0005544708 python3.9[126966]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:15:46 np0005544708 python3.9[127118]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:15:46 np0005544708 python3.9[127241]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796545.740133-183-71369721065533/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=a766d8357ea3590ae1b89bf19947a192ebb63bce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:15:47 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v276: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:47 np0005544708 python3.9[127393]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:15:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:15:48 np0005544708 python3.9[127516]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796547.1255841-183-165062591605828/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=8344db67344de20b5740c0f08184ccf2d3f2112a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:15:48 np0005544708 python3.9[127668]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:15:49 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v277: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:49 np0005544708 python3.9[127791]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796548.404328-183-234449418829706/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ef7e5e8bbf0cfdcf910f97bed4ab276d0d1fbcac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:15:50 np0005544708 python3.9[127943]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:15:51 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v278: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:51 np0005544708 python3.9[128095]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:15:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:15:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:15:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:15:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:15:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:15:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:15:52 np0005544708 python3.9[128218]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796551.014292-251-179347464787912/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b6113f6c3c3112d11c0348cd0a11619cc2e5f10c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:15:52 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:15:52 np0005544708 python3.9[128370]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:15:53 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v279: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:53 np0005544708 python3.9[128522]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:15:54 np0005544708 python3.9[128645]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796553.172751-275-110786301521603/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b6113f6c3c3112d11c0348cd0a11619cc2e5f10c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:15:55 np0005544708 python3.9[128797]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:15:55 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v280: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:55 np0005544708 python3.9[128949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:15:56 np0005544708 python3.9[129072]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796555.3093896-299-74204046778787/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b6113f6c3c3112d11c0348cd0a11619cc2e5f10c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:15:56 np0005544708 python3.9[129224]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:15:57 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v281: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:57 np0005544708 python3.9[129376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:15:57 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:15:58 np0005544708 python3.9[129499]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796557.14238-323-143196744162433/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b6113f6c3c3112d11c0348cd0a11619cc2e5f10c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:15:58 np0005544708 python3.9[129651]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:15:59 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v282: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:15:59 np0005544708 python3.9[129803]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:16:00 np0005544708 python3.9[129926]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796558.9222195-347-56648605168673/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b6113f6c3c3112d11c0348cd0a11619cc2e5f10c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:16:00.435135) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796560435255, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6622, "num_deletes": 251, "total_data_size": 7659381, "memory_usage": 7795256, "flush_reason": "Manual Compaction"}
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796560485687, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 5723224, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 6765, "table_properties": {"data_size": 5699882, "index_size": 15164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7109, "raw_key_size": 62775, "raw_average_key_size": 22, "raw_value_size": 5645795, "raw_average_value_size": 1993, "num_data_blocks": 680, "num_entries": 2832, "num_filter_entries": 2832, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796081, "oldest_key_time": 1764796081, "file_creation_time": 1764796560, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 50607 microseconds, and 15131 cpu microseconds.
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:16:00.485746) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 5723224 bytes OK
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:16:00.485772) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:16:00.488358) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:16:00.488377) EVENT_LOG_v1 {"time_micros": 1764796560488372, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:16:00.488406) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 7631436, prev total WAL file size 7631436, number of live WAL files 2.
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:16:00.490333) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(5589KB) 13(58KB) 8(1944B)]
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796560490452, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 5785128, "oldest_snapshot_seqno": -1}
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 2658 keys, 5738140 bytes, temperature: kUnknown
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796560531962, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 5738140, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5715125, "index_size": 15290, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6661, "raw_key_size": 61215, "raw_average_key_size": 23, "raw_value_size": 5662315, "raw_average_value_size": 2130, "num_data_blocks": 686, "num_entries": 2658, "num_filter_entries": 2658, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796079, "oldest_key_time": 0, "file_creation_time": 1764796560, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:16:00.532320) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 5738140 bytes
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:16:00.533922) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.7 rd, 137.5 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(5.5, 0.0 +0.0 blob) out(5.5 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 2947, records dropped: 289 output_compression: NoCompression
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:16:00.533945) EVENT_LOG_v1 {"time_micros": 1764796560533934, "job": 4, "event": "compaction_finished", "compaction_time_micros": 41717, "compaction_time_cpu_micros": 15265, "output_level": 6, "num_output_files": 1, "total_output_size": 5738140, "num_input_records": 2947, "num_output_records": 2658, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796560535865, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796560536127, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796560536400, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec  3 16:16:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:16:00.490203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:16:00 np0005544708 python3.9[130079]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:16:01 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v283: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:01 np0005544708 python3.9[130231]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:16:02 np0005544708 python3.9[130354]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796561.1422749-371-165587659249820/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b6113f6c3c3112d11c0348cd0a11619cc2e5f10c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:16:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:16:02 np0005544708 systemd[1]: session-43.scope: Deactivated successfully.
Dec  3 16:16:02 np0005544708 systemd[1]: session-43.scope: Consumed 25.634s CPU time.
Dec  3 16:16:02 np0005544708 systemd-logind[787]: Session 43 logged out. Waiting for processes to exit.
Dec  3 16:16:02 np0005544708 systemd-logind[787]: Removed session 43.
Dec  3 16:16:03 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v284: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:05 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v285: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:07 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v286: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:07 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:16:08 np0005544708 systemd-logind[787]: New session 44 of user zuul.
Dec  3 16:16:08 np0005544708 systemd[1]: Started Session 44 of User zuul.
Dec  3 16:16:09 np0005544708 python3.9[130534]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:16:09 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v287: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:09 np0005544708 python3.9[130686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:16:10 np0005544708 python3.9[130809]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764796569.2706401-34-48008386959101/.source.conf _original_basename=ceph.conf follow=False checksum=61832579ecbf8b3bbfd3eb7faf9249a287d8a08d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:16:11 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v288: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:11 np0005544708 python3.9[130961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:16:12 np0005544708 python3.9[131084]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764796570.9435086-34-269877758566084/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=100907596fddba72a04e8a16770dbec161f9317a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:16:12 np0005544708 systemd[1]: session-44.scope: Deactivated successfully.
Dec  3 16:16:12 np0005544708 systemd[1]: session-44.scope: Consumed 3.059s CPU time.
Dec  3 16:16:12 np0005544708 systemd-logind[787]: Session 44 logged out. Waiting for processes to exit.
Dec  3 16:16:12 np0005544708 systemd-logind[787]: Removed session 44.
Dec  3 16:16:12 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:16:13 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v289: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:15 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v290: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:17 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v291: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:16:18 np0005544708 systemd-logind[787]: New session 45 of user zuul.
Dec  3 16:16:18 np0005544708 systemd[1]: Started Session 45 of User zuul.
Dec  3 16:16:19 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v292: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:19 np0005544708 python3.9[131264]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:16:21 np0005544708 python3.9[131420]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:16:21
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', '.mgr', 'images', 'vms', 'cephfs.cephfs.data', 'volumes']
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v293: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:16:21 np0005544708 python3.9[131572]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:16:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:16:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:16:22 np0005544708 python3.9[131722]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:16:23 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v294: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:23 np0005544708 python3.9[131874]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  3 16:16:25 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v295: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:25 np0005544708 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec  3 16:16:25 np0005544708 python3.9[132030]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 16:16:26 np0005544708 python3.9[132114]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:16:27 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v296: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:16:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:16:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:16:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:16:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:16:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:16:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:16:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:16:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:16:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:16:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:16:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:16:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:16:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:16:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:16:27 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:16:29 np0005544708 python3.9[132269]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 16:16:29 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v297: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:31 np0005544708 python3[132424]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec  3 16:16:31 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v298: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:32 np0005544708 python3.9[132576]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:16:32 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:16:32 np0005544708 python3.9[132728]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:16:33 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v299: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:33 np0005544708 python3.9[132806]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:16:34 np0005544708 python3.9[132958]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:16:34 np0005544708 python3.9[133036]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.l67wsdtv recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:16:35 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v300: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:35 np0005544708 python3.9[133188]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:16:36 np0005544708 python3.9[133266]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:16:37 np0005544708 python3.9[133418]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:16:37 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v301: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:16:37 np0005544708 python3[133571]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  3 16:16:38 np0005544708 python3.9[133723]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:16:39 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v302: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:39 np0005544708 python3.9[133848]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796598.1872478-157-200988617254568/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:16:40 np0005544708 python3.9[134000]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:16:40 np0005544708 python3.9[134125]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796599.670772-172-77219131172768/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:16:41 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v303: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:41 np0005544708 python3.9[134277]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:16:42 np0005544708 python3.9[134467]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796601.014516-187-131769648164895/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:16:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:16:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:16:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:16:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:16:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:16:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:16:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:16:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:16:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:16:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:16:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:16:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:16:42 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:16:42 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:16:42 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:16:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:16:42 np0005544708 podman[134670]: 2025-12-03 21:16:42.770082583 +0000 UTC m=+0.057863764 container create ac1da2f7ced58262ef0e1b90c3990d6cb004173acb76740f047bf1c6d9fdf830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:16:42 np0005544708 systemd[1]: Started libpod-conmon-ac1da2f7ced58262ef0e1b90c3990d6cb004173acb76740f047bf1c6d9fdf830.scope.
Dec  3 16:16:42 np0005544708 podman[134670]: 2025-12-03 21:16:42.741720047 +0000 UTC m=+0.029501288 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:16:42 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:16:42 np0005544708 podman[134670]: 2025-12-03 21:16:42.870136692 +0000 UTC m=+0.157917923 container init ac1da2f7ced58262ef0e1b90c3990d6cb004173acb76740f047bf1c6d9fdf830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hypatia, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:16:42 np0005544708 podman[134670]: 2025-12-03 21:16:42.885413449 +0000 UTC m=+0.173194640 container start ac1da2f7ced58262ef0e1b90c3990d6cb004173acb76740f047bf1c6d9fdf830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hypatia, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  3 16:16:42 np0005544708 podman[134670]: 2025-12-03 21:16:42.889444036 +0000 UTC m=+0.177225227 container attach ac1da2f7ced58262ef0e1b90c3990d6cb004173acb76740f047bf1c6d9fdf830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hypatia, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:16:42 np0005544708 great_hypatia[134716]: 167 167
Dec  3 16:16:42 np0005544708 systemd[1]: libpod-ac1da2f7ced58262ef0e1b90c3990d6cb004173acb76740f047bf1c6d9fdf830.scope: Deactivated successfully.
Dec  3 16:16:42 np0005544708 podman[134670]: 2025-12-03 21:16:42.896625738 +0000 UTC m=+0.184406949 container died ac1da2f7ced58262ef0e1b90c3990d6cb004173acb76740f047bf1c6d9fdf830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hypatia, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:16:42 np0005544708 systemd[1]: var-lib-containers-storage-overlay-59bbda1054c5e6772175430c2c210aba1729cc58c86dfb6cffaad54ec419d32e-merged.mount: Deactivated successfully.
Dec  3 16:16:42 np0005544708 podman[134670]: 2025-12-03 21:16:42.954141842 +0000 UTC m=+0.241923033 container remove ac1da2f7ced58262ef0e1b90c3990d6cb004173acb76740f047bf1c6d9fdf830 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:16:42 np0005544708 python3.9[134712]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:16:42 np0005544708 systemd[1]: libpod-conmon-ac1da2f7ced58262ef0e1b90c3990d6cb004173acb76740f047bf1c6d9fdf830.scope: Deactivated successfully.
Dec  3 16:16:43 np0005544708 podman[134764]: 2025-12-03 21:16:43.191995375 +0000 UTC m=+0.082460780 container create f85d87aaf4f44bac521e531a3cb341bf47252f37c273b1b3d87cb0af9c9da278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  3 16:16:43 np0005544708 podman[134764]: 2025-12-03 21:16:43.141882489 +0000 UTC m=+0.032347984 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:16:43 np0005544708 systemd[1]: Started libpod-conmon-f85d87aaf4f44bac521e531a3cb341bf47252f37c273b1b3d87cb0af9c9da278.scope.
Dec  3 16:16:43 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:16:43 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead3787b3bd20dc938b0e05759d06469b89b24829645bf59aad67bd3f5bba1ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:16:43 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead3787b3bd20dc938b0e05759d06469b89b24829645bf59aad67bd3f5bba1ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:16:43 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead3787b3bd20dc938b0e05759d06469b89b24829645bf59aad67bd3f5bba1ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:16:43 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead3787b3bd20dc938b0e05759d06469b89b24829645bf59aad67bd3f5bba1ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:16:43 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead3787b3bd20dc938b0e05759d06469b89b24829645bf59aad67bd3f5bba1ea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:16:43 np0005544708 podman[134764]: 2025-12-03 21:16:43.307409364 +0000 UTC m=+0.197874789 container init f85d87aaf4f44bac521e531a3cb341bf47252f37c273b1b3d87cb0af9c9da278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_elbakyan, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 16:16:43 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v304: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:43 np0005544708 podman[134764]: 2025-12-03 21:16:43.325325971 +0000 UTC m=+0.215791376 container start f85d87aaf4f44bac521e531a3cb341bf47252f37c273b1b3d87cb0af9c9da278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:16:43 np0005544708 podman[134764]: 2025-12-03 21:16:43.330025347 +0000 UTC m=+0.220490772 container attach f85d87aaf4f44bac521e531a3cb341bf47252f37c273b1b3d87cb0af9c9da278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  3 16:16:43 np0005544708 python3.9[134885]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796602.4339132-202-30793087710172/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:16:43 np0005544708 pedantic_elbakyan[134828]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:16:43 np0005544708 pedantic_elbakyan[134828]: --> All data devices are unavailable
Dec  3 16:16:43 np0005544708 systemd[1]: libpod-f85d87aaf4f44bac521e531a3cb341bf47252f37c273b1b3d87cb0af9c9da278.scope: Deactivated successfully.
Dec  3 16:16:43 np0005544708 podman[134764]: 2025-12-03 21:16:43.929134075 +0000 UTC m=+0.819599480 container died f85d87aaf4f44bac521e531a3cb341bf47252f37c273b1b3d87cb0af9c9da278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  3 16:16:43 np0005544708 systemd[1]: var-lib-containers-storage-overlay-ead3787b3bd20dc938b0e05759d06469b89b24829645bf59aad67bd3f5bba1ea-merged.mount: Deactivated successfully.
Dec  3 16:16:43 np0005544708 podman[134764]: 2025-12-03 21:16:43.992823923 +0000 UTC m=+0.883289338 container remove f85d87aaf4f44bac521e531a3cb341bf47252f37c273b1b3d87cb0af9c9da278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:16:44 np0005544708 systemd[1]: libpod-conmon-f85d87aaf4f44bac521e531a3cb341bf47252f37c273b1b3d87cb0af9c9da278.scope: Deactivated successfully.
Dec  3 16:16:44 np0005544708 podman[135126]: 2025-12-03 21:16:44.415686291 +0000 UTC m=+0.041917659 container create fa068596cf1b3e98d1bf48b4fc9821277a7168c7f3bb309f72afd7a240e8490f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_grothendieck, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec  3 16:16:44 np0005544708 systemd[1]: Started libpod-conmon-fa068596cf1b3e98d1bf48b4fc9821277a7168c7f3bb309f72afd7a240e8490f.scope.
Dec  3 16:16:44 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:16:44 np0005544708 podman[135126]: 2025-12-03 21:16:44.396443958 +0000 UTC m=+0.022675326 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:16:44 np0005544708 podman[135126]: 2025-12-03 21:16:44.493097746 +0000 UTC m=+0.119329104 container init fa068596cf1b3e98d1bf48b4fc9821277a7168c7f3bb309f72afd7a240e8490f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  3 16:16:44 np0005544708 podman[135126]: 2025-12-03 21:16:44.50035761 +0000 UTC m=+0.126588958 container start fa068596cf1b3e98d1bf48b4fc9821277a7168c7f3bb309f72afd7a240e8490f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 16:16:44 np0005544708 podman[135126]: 2025-12-03 21:16:44.503948796 +0000 UTC m=+0.130180214 container attach fa068596cf1b3e98d1bf48b4fc9821277a7168c7f3bb309f72afd7a240e8490f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:16:44 np0005544708 focused_grothendieck[135143]: 167 167
Dec  3 16:16:44 np0005544708 systemd[1]: libpod-fa068596cf1b3e98d1bf48b4fc9821277a7168c7f3bb309f72afd7a240e8490f.scope: Deactivated successfully.
Dec  3 16:16:44 np0005544708 conmon[135143]: conmon fa068596cf1b3e98d1bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa068596cf1b3e98d1bf48b4fc9821277a7168c7f3bb309f72afd7a240e8490f.scope/container/memory.events
Dec  3 16:16:44 np0005544708 podman[135126]: 2025-12-03 21:16:44.511105416 +0000 UTC m=+0.137336814 container died fa068596cf1b3e98d1bf48b4fc9821277a7168c7f3bb309f72afd7a240e8490f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 16:16:44 np0005544708 systemd[1]: var-lib-containers-storage-overlay-e2ea9fca907de7d17d2c7bfc173461d3148e460bc9cec11cb3909145316a2a12-merged.mount: Deactivated successfully.
Dec  3 16:16:44 np0005544708 podman[135126]: 2025-12-03 21:16:44.555073809 +0000 UTC m=+0.181305167 container remove fa068596cf1b3e98d1bf48b4fc9821277a7168c7f3bb309f72afd7a240e8490f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:16:44 np0005544708 systemd[1]: libpod-conmon-fa068596cf1b3e98d1bf48b4fc9821277a7168c7f3bb309f72afd7a240e8490f.scope: Deactivated successfully.
Dec  3 16:16:44 np0005544708 python3.9[135113]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:16:44 np0005544708 podman[135191]: 2025-12-03 21:16:44.772011614 +0000 UTC m=+0.058285355 container create d9d23daf99503ac0da35a6da5ab2755848dce16ea8434f5f0fb0cb42fb861d22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 16:16:44 np0005544708 systemd[1]: Started libpod-conmon-d9d23daf99503ac0da35a6da5ab2755848dce16ea8434f5f0fb0cb42fb861d22.scope.
Dec  3 16:16:44 np0005544708 podman[135191]: 2025-12-03 21:16:44.74522988 +0000 UTC m=+0.031503711 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:16:44 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:16:44 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3402d84e47cb21cfb44561de0c608e58c7b602df9b7129ac7e92ade1e91b3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:16:44 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3402d84e47cb21cfb44561de0c608e58c7b602df9b7129ac7e92ade1e91b3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:16:44 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3402d84e47cb21cfb44561de0c608e58c7b602df9b7129ac7e92ade1e91b3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:16:44 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e3402d84e47cb21cfb44561de0c608e58c7b602df9b7129ac7e92ade1e91b3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:16:44 np0005544708 podman[135191]: 2025-12-03 21:16:44.866460093 +0000 UTC m=+0.152733924 container init d9d23daf99503ac0da35a6da5ab2755848dce16ea8434f5f0fb0cb42fb861d22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ride, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  3 16:16:44 np0005544708 podman[135191]: 2025-12-03 21:16:44.879728488 +0000 UTC m=+0.166002239 container start d9d23daf99503ac0da35a6da5ab2755848dce16ea8434f5f0fb0cb42fb861d22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ride, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:16:44 np0005544708 podman[135191]: 2025-12-03 21:16:44.883810896 +0000 UTC m=+0.170084707 container attach d9d23daf99503ac0da35a6da5ab2755848dce16ea8434f5f0fb0cb42fb861d22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]: {
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:    "0": [
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:        {
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "devices": [
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "/dev/loop3"
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            ],
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "lv_name": "ceph_lv0",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "lv_size": "21470642176",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "name": "ceph_lv0",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "tags": {
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.cluster_name": "ceph",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.crush_device_class": "",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.encrypted": "0",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.objectstore": "bluestore",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.osd_id": "0",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.type": "block",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.vdo": "0",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.with_tpm": "0"
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            },
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "type": "block",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "vg_name": "ceph_vg0"
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:        }
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:    ],
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:    "1": [
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:        {
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "devices": [
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "/dev/loop4"
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            ],
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "lv_name": "ceph_lv1",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "lv_size": "21470642176",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "name": "ceph_lv1",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "tags": {
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.cluster_name": "ceph",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.crush_device_class": "",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.encrypted": "0",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.objectstore": "bluestore",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.osd_id": "1",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.type": "block",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.vdo": "0",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.with_tpm": "0"
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            },
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "type": "block",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "vg_name": "ceph_vg1"
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:        }
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:    ],
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:    "2": [
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:        {
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "devices": [
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "/dev/loop5"
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            ],
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "lv_name": "ceph_lv2",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "lv_size": "21470642176",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "name": "ceph_lv2",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "tags": {
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.cluster_name": "ceph",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.crush_device_class": "",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.encrypted": "0",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.objectstore": "bluestore",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.osd_id": "2",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.type": "block",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.vdo": "0",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:                "ceph.with_tpm": "0"
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            },
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "type": "block",
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:            "vg_name": "ceph_vg2"
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:        }
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]:    ]
Dec  3 16:16:45 np0005544708 quizzical_ride[135233]: }
Dec  3 16:16:45 np0005544708 systemd[1]: libpod-d9d23daf99503ac0da35a6da5ab2755848dce16ea8434f5f0fb0cb42fb861d22.scope: Deactivated successfully.
Dec  3 16:16:45 np0005544708 podman[135191]: 2025-12-03 21:16:45.239371659 +0000 UTC m=+0.525645410 container died d9d23daf99503ac0da35a6da5ab2755848dce16ea8434f5f0fb0cb42fb861d22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ride, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Dec  3 16:16:45 np0005544708 systemd[1]: var-lib-containers-storage-overlay-5e3402d84e47cb21cfb44561de0c608e58c7b602df9b7129ac7e92ade1e91b3c-merged.mount: Deactivated successfully.
Dec  3 16:16:45 np0005544708 podman[135191]: 2025-12-03 21:16:45.287717299 +0000 UTC m=+0.573991040 container remove d9d23daf99503ac0da35a6da5ab2755848dce16ea8434f5f0fb0cb42fb861d22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ride, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:16:45 np0005544708 systemd[1]: libpod-conmon-d9d23daf99503ac0da35a6da5ab2755848dce16ea8434f5f0fb0cb42fb861d22.scope: Deactivated successfully.
Dec  3 16:16:45 np0005544708 python3.9[135314]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796603.9000785-217-75560881586217/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:16:45 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v305: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:45 np0005544708 podman[135474]: 2025-12-03 21:16:45.736905989 +0000 UTC m=+0.042445233 container create 223906ed88a43097c86978011bdc6b51e67d747c3664f2ea4de10a045019cbfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec  3 16:16:45 np0005544708 systemd[1]: Started libpod-conmon-223906ed88a43097c86978011bdc6b51e67d747c3664f2ea4de10a045019cbfe.scope.
Dec  3 16:16:45 np0005544708 podman[135474]: 2025-12-03 21:16:45.722160916 +0000 UTC m=+0.027700190 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:16:45 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:16:45 np0005544708 podman[135474]: 2025-12-03 21:16:45.887896606 +0000 UTC m=+0.193435950 container init 223906ed88a43097c86978011bdc6b51e67d747c3664f2ea4de10a045019cbfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_fermi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  3 16:16:45 np0005544708 podman[135474]: 2025-12-03 21:16:45.899954447 +0000 UTC m=+0.205493711 container start 223906ed88a43097c86978011bdc6b51e67d747c3664f2ea4de10a045019cbfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_fermi, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:16:45 np0005544708 podman[135474]: 2025-12-03 21:16:45.903332647 +0000 UTC m=+0.208871911 container attach 223906ed88a43097c86978011bdc6b51e67d747c3664f2ea4de10a045019cbfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec  3 16:16:45 np0005544708 beautiful_fermi[135528]: 167 167
Dec  3 16:16:45 np0005544708 systemd[1]: libpod-223906ed88a43097c86978011bdc6b51e67d747c3664f2ea4de10a045019cbfe.scope: Deactivated successfully.
Dec  3 16:16:45 np0005544708 podman[135474]: 2025-12-03 21:16:45.907117448 +0000 UTC m=+0.212656742 container died 223906ed88a43097c86978011bdc6b51e67d747c3664f2ea4de10a045019cbfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  3 16:16:45 np0005544708 systemd[1]: var-lib-containers-storage-overlay-ae6764862be8fbaecb4bb4d1ee582e5932338384d624f418b02c7bee793361e7-merged.mount: Deactivated successfully.
Dec  3 16:16:45 np0005544708 podman[135474]: 2025-12-03 21:16:45.957730198 +0000 UTC m=+0.263269482 container remove 223906ed88a43097c86978011bdc6b51e67d747c3664f2ea4de10a045019cbfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_fermi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:16:45 np0005544708 systemd[1]: libpod-conmon-223906ed88a43097c86978011bdc6b51e67d747c3664f2ea4de10a045019cbfe.scope: Deactivated successfully.
Dec  3 16:16:46 np0005544708 python3.9[135573]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:16:46 np0005544708 podman[135581]: 2025-12-03 21:16:46.192191801 +0000 UTC m=+0.067859311 container create 8156f29cea77848083a27a8bd92a403d597f7b97519e4480006956713a5d2b4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_driscoll, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec  3 16:16:46 np0005544708 systemd[1]: Started libpod-conmon-8156f29cea77848083a27a8bd92a403d597f7b97519e4480006956713a5d2b4f.scope.
Dec  3 16:16:46 np0005544708 podman[135581]: 2025-12-03 21:16:46.166505296 +0000 UTC m=+0.042172896 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:16:46 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:16:46 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4242e3905bdc3436151a07c22ca297b83ef7169ac4ef1e61279222871b5fc2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:16:46 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4242e3905bdc3436151a07c22ca297b83ef7169ac4ef1e61279222871b5fc2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:16:46 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4242e3905bdc3436151a07c22ca297b83ef7169ac4ef1e61279222871b5fc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:16:46 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4242e3905bdc3436151a07c22ca297b83ef7169ac4ef1e61279222871b5fc2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:16:46 np0005544708 podman[135581]: 2025-12-03 21:16:46.278557594 +0000 UTC m=+0.154225124 container init 8156f29cea77848083a27a8bd92a403d597f7b97519e4480006956713a5d2b4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_driscoll, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:16:46 np0005544708 podman[135581]: 2025-12-03 21:16:46.2870055 +0000 UTC m=+0.162673000 container start 8156f29cea77848083a27a8bd92a403d597f7b97519e4480006956713a5d2b4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_driscoll, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Dec  3 16:16:46 np0005544708 podman[135581]: 2025-12-03 21:16:46.29002637 +0000 UTC m=+0.165693880 container attach 8156f29cea77848083a27a8bd92a403d597f7b97519e4480006956713a5d2b4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:16:46 np0005544708 python3.9[135780]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:16:46 np0005544708 lvm[135830]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:16:46 np0005544708 lvm[135829]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:16:46 np0005544708 lvm[135829]: VG ceph_vg0 finished
Dec  3 16:16:46 np0005544708 lvm[135830]: VG ceph_vg1 finished
Dec  3 16:16:46 np0005544708 lvm[135837]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:16:46 np0005544708 lvm[135837]: VG ceph_vg2 finished
Dec  3 16:16:47 np0005544708 busy_driscoll[135599]: {}
Dec  3 16:16:47 np0005544708 systemd[1]: libpod-8156f29cea77848083a27a8bd92a403d597f7b97519e4480006956713a5d2b4f.scope: Deactivated successfully.
Dec  3 16:16:47 np0005544708 systemd[1]: libpod-8156f29cea77848083a27a8bd92a403d597f7b97519e4480006956713a5d2b4f.scope: Consumed 1.270s CPU time.
Dec  3 16:16:47 np0005544708 podman[135581]: 2025-12-03 21:16:47.125301058 +0000 UTC m=+1.000968618 container died 8156f29cea77848083a27a8bd92a403d597f7b97519e4480006956713a5d2b4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_driscoll, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:16:47 np0005544708 systemd[1]: var-lib-containers-storage-overlay-3c4242e3905bdc3436151a07c22ca297b83ef7169ac4ef1e61279222871b5fc2-merged.mount: Deactivated successfully.
Dec  3 16:16:47 np0005544708 podman[135581]: 2025-12-03 21:16:47.180047577 +0000 UTC m=+1.055715097 container remove 8156f29cea77848083a27a8bd92a403d597f7b97519e4480006956713a5d2b4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_driscoll, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:16:47 np0005544708 systemd[1]: libpod-conmon-8156f29cea77848083a27a8bd92a403d597f7b97519e4480006956713a5d2b4f.scope: Deactivated successfully.
Dec  3 16:16:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:16:47 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:16:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:16:47 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:16:47 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v306: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:16:47 np0005544708 python3.9[136023]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:16:48 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:16:48 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:16:48 np0005544708 python3.9[136175]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:16:49 np0005544708 python3.9[136328]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:16:49 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v307: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:49 np0005544708 python3.9[136482]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:16:50 np0005544708 python3.9[136637]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:16:51 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v308: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:16:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:16:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:16:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:16:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:16:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:16:51 np0005544708 python3.9[136787]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:16:52 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:16:53 np0005544708 python3.9[136940]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:16:53 np0005544708 ovs-vsctl[136941]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec  3 16:16:53 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v309: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:53 np0005544708 python3.9[137093]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:16:54 np0005544708 python3.9[137248]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:16:54 np0005544708 ovs-vsctl[137249]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec  3 16:16:55 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v310: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:55 np0005544708 python3.9[137399]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:16:56 np0005544708 python3.9[137553]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:16:56 np0005544708 python3.9[137705]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:16:57 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v311: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:57 np0005544708 python3.9[137783]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:16:57 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:16:58 np0005544708 python3.9[137935]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:16:58 np0005544708 python3.9[138013]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:16:59 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v312: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:16:59 np0005544708 python3.9[138165]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:17:00 np0005544708 python3.9[138317]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:17:00 np0005544708 python3.9[138395]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:17:01 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v313: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:01 np0005544708 python3.9[138547]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:17:01 np0005544708 python3.9[138625]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:17:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:17:02 np0005544708 python3.9[138777]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:17:02 np0005544708 systemd[1]: Reloading.
Dec  3 16:17:02 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:17:02 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:17:03 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v314: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:03 np0005544708 python3.9[138966]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:17:04 np0005544708 python3.9[139044]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:17:04 np0005544708 python3.9[139196]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:17:05 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v315: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:05 np0005544708 python3.9[139274]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:17:06 np0005544708 python3.9[139426]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:17:06 np0005544708 systemd[1]: Reloading.
Dec  3 16:17:06 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:17:06 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:17:06 np0005544708 systemd[1]: Starting Create netns directory...
Dec  3 16:17:06 np0005544708 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  3 16:17:06 np0005544708 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  3 16:17:06 np0005544708 systemd[1]: Finished Create netns directory.
Dec  3 16:17:07 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v316: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:07 np0005544708 python3.9[139618]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:17:07 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:17:08 np0005544708 python3.9[139770]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:17:08 np0005544708 python3.9[139893]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764796627.6422122-468-85157909291054/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:17:09 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v317: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:09 np0005544708 python3.9[140045]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:17:10 np0005544708 python3.9[140197]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:17:11 np0005544708 python3.9[140320]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764796629.9966214-493-211087586940622/.source.json _original_basename=.yibc8jrh follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:17:11 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v318: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:12 np0005544708 python3.9[140472]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:17:12.621330) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796632621376, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 791, "num_deletes": 251, "total_data_size": 718346, "memory_usage": 733872, "flush_reason": "Manual Compaction"}
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796632629612, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 458058, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6766, "largest_seqno": 7556, "table_properties": {"data_size": 454744, "index_size": 1158, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8359, "raw_average_key_size": 19, "raw_value_size": 447741, "raw_average_value_size": 1043, "num_data_blocks": 54, "num_entries": 429, "num_filter_entries": 429, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796561, "oldest_key_time": 1764796561, "file_creation_time": 1764796632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 8352 microseconds, and 3558 cpu microseconds.
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:17:12.629681) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 458058 bytes OK
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:17:12.629711) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:17:12.631256) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:17:12.631280) EVENT_LOG_v1 {"time_micros": 1764796632631273, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:17:12.631303) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 714382, prev total WAL file size 714382, number of live WAL files 2.
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:17:12.632065) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(447KB)], [20(5603KB)]
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796632632162, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 6196198, "oldest_snapshot_seqno": -1}
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 2603 keys, 4497454 bytes, temperature: kUnknown
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796632677641, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 4497454, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4478003, "index_size": 11854, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6533, "raw_key_size": 60429, "raw_average_key_size": 23, "raw_value_size": 4429259, "raw_average_value_size": 1701, "num_data_blocks": 537, "num_entries": 2603, "num_filter_entries": 2603, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796079, "oldest_key_time": 0, "file_creation_time": 1764796632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:17:12.678030) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 4497454 bytes
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:17:12.679615) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.9 rd, 98.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 5.5 +0.0 blob) out(4.3 +0.0 blob), read-write-amplify(23.3) write-amplify(9.8) OK, records in: 3087, records dropped: 484 output_compression: NoCompression
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:17:12.679656) EVENT_LOG_v1 {"time_micros": 1764796632679638, "job": 6, "event": "compaction_finished", "compaction_time_micros": 45608, "compaction_time_cpu_micros": 23751, "output_level": 6, "num_output_files": 1, "total_output_size": 4497454, "num_input_records": 3087, "num_output_records": 2603, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796632679959, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796632681951, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:17:12.631974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:17:12.682045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:17:12.682053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:17:12.682056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:17:12.682059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:17:12 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:17:12.682063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:17:13 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v319: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:15 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v320: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:15 np0005544708 python3.9[140899]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec  3 16:17:16 np0005544708 python3.9[141051]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 16:17:17 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v321: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:17 np0005544708 python3.9[141203]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  3 16:17:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:17:19 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v322: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:19 np0005544708 python3[141382]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:17:21
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'images', 'vms']
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v323: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:17:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:17:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:17:23 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v324: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:24 np0005544708 podman[141397]: 2025-12-03 21:17:24.347153619 +0000 UTC m=+4.814015001 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  3 16:17:24 np0005544708 podman[141522]: 2025-12-03 21:17:24.524089919 +0000 UTC m=+0.074949760 container create eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_controller)
Dec  3 16:17:24 np0005544708 podman[141522]: 2025-12-03 21:17:24.488080819 +0000 UTC m=+0.038940750 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  3 16:17:24 np0005544708 python3[141382]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  3 16:17:25 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v325: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:25 np0005544708 python3.9[141712]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:17:26 np0005544708 python3.9[141866]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:17:27 np0005544708 python3.9[141942]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:17:27 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v326: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:17:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:17:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:17:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:17:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:17:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:17:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:17:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:17:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:17:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:17:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:17:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:17:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:17:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:17:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:17:27 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:17:28 np0005544708 python3.9[142093]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764796647.1119301-581-48494299229988/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:17:28 np0005544708 python3.9[142171]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 16:17:28 np0005544708 systemd[1]: Reloading.
Dec  3 16:17:28 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:17:28 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:17:29 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v327: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:29 np0005544708 python3.9[142283]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:17:29 np0005544708 systemd[1]: Reloading.
Dec  3 16:17:30 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:17:30 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:17:30 np0005544708 systemd[1]: Starting ovn_controller container...
Dec  3 16:17:30 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:17:30 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0939388a2c0da322f96a12ab260619ad633970d69fb83fae8368a9d8e503a2c6/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  3 16:17:30 np0005544708 systemd[1]: Started /usr/bin/podman healthcheck run eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b.
Dec  3 16:17:30 np0005544708 podman[142325]: 2025-12-03 21:17:30.393315362 +0000 UTC m=+0.165336790 container init eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 16:17:30 np0005544708 ovn_controller[142340]: + sudo -E kolla_set_configs
Dec  3 16:17:30 np0005544708 podman[142325]: 2025-12-03 21:17:30.423067915 +0000 UTC m=+0.195089353 container start eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:17:30 np0005544708 edpm-start-podman-container[142325]: ovn_controller
Dec  3 16:17:30 np0005544708 systemd[1]: Created slice User Slice of UID 0.
Dec  3 16:17:30 np0005544708 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec  3 16:17:30 np0005544708 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec  3 16:17:30 np0005544708 systemd[1]: Starting User Manager for UID 0...
Dec  3 16:17:30 np0005544708 edpm-start-podman-container[142324]: Creating additional drop-in dependency for "ovn_controller" (eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b)
Dec  3 16:17:30 np0005544708 podman[142347]: 2025-12-03 21:17:30.542332686 +0000 UTC m=+0.100139132 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  3 16:17:30 np0005544708 systemd[1]: eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b-1cfe6a643379f197.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 16:17:30 np0005544708 systemd[1]: eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b-1cfe6a643379f197.service: Failed with result 'exit-code'.
Dec  3 16:17:30 np0005544708 systemd[1]: Reloading.
Dec  3 16:17:30 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:17:30 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:17:30 np0005544708 systemd[142376]: Queued start job for default target Main User Target.
Dec  3 16:17:30 np0005544708 systemd[142376]: Created slice User Application Slice.
Dec  3 16:17:30 np0005544708 systemd[142376]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec  3 16:17:30 np0005544708 systemd[142376]: Started Daily Cleanup of User's Temporary Directories.
Dec  3 16:17:30 np0005544708 systemd[142376]: Reached target Paths.
Dec  3 16:17:30 np0005544708 systemd[142376]: Reached target Timers.
Dec  3 16:17:30 np0005544708 systemd[142376]: Starting D-Bus User Message Bus Socket...
Dec  3 16:17:30 np0005544708 systemd[142376]: Starting Create User's Volatile Files and Directories...
Dec  3 16:17:30 np0005544708 systemd[142376]: Finished Create User's Volatile Files and Directories.
Dec  3 16:17:30 np0005544708 systemd[142376]: Listening on D-Bus User Message Bus Socket.
Dec  3 16:17:30 np0005544708 systemd[142376]: Reached target Sockets.
Dec  3 16:17:30 np0005544708 systemd[142376]: Reached target Basic System.
Dec  3 16:17:30 np0005544708 systemd[142376]: Reached target Main User Target.
Dec  3 16:17:30 np0005544708 systemd[142376]: Startup finished in 191ms.
Dec  3 16:17:30 np0005544708 systemd[1]: Started User Manager for UID 0.
Dec  3 16:17:30 np0005544708 systemd[1]: Started ovn_controller container.
Dec  3 16:17:30 np0005544708 systemd[1]: Started Session c1 of User root.
Dec  3 16:17:30 np0005544708 ovn_controller[142340]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 16:17:30 np0005544708 ovn_controller[142340]: INFO:__main__:Validating config file
Dec  3 16:17:30 np0005544708 ovn_controller[142340]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 16:17:30 np0005544708 ovn_controller[142340]: INFO:__main__:Writing out command to execute
Dec  3 16:17:30 np0005544708 systemd[1]: session-c1.scope: Deactivated successfully.
Dec  3 16:17:30 np0005544708 ovn_controller[142340]: ++ cat /run_command
Dec  3 16:17:30 np0005544708 ovn_controller[142340]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  3 16:17:30 np0005544708 ovn_controller[142340]: + ARGS=
Dec  3 16:17:30 np0005544708 ovn_controller[142340]: + sudo kolla_copy_cacerts
Dec  3 16:17:31 np0005544708 systemd[1]: Started Session c2 of User root.
Dec  3 16:17:31 np0005544708 systemd[1]: session-c2.scope: Deactivated successfully.
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: + [[ ! -n '' ]]
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: + . kolla_extend_start
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: + umask 0022
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec  3 16:17:31 np0005544708 NetworkManager[48996]: <info>  [1764796651.0960] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec  3 16:17:31 np0005544708 NetworkManager[48996]: <info>  [1764796651.0967] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  3 16:17:31 np0005544708 NetworkManager[48996]: <info>  [1764796651.0978] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec  3 16:17:31 np0005544708 NetworkManager[48996]: <info>  [1764796651.0984] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec  3 16:17:31 np0005544708 NetworkManager[48996]: <info>  [1764796651.0987] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  3 16:17:31 np0005544708 kernel: br-int: entered promiscuous mode
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00019|main|INFO|OVS feature set changed, force recompute.
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  3 16:17:31 np0005544708 ovn_controller[142340]: 2025-12-03T21:17:31Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  3 16:17:31 np0005544708 NetworkManager[48996]: <info>  [1764796651.1248] manager: (ovn-5d60bc-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec  3 16:17:31 np0005544708 systemd-udevd[142471]: Network interface NamePolicy= disabled on kernel command line.
Dec  3 16:17:31 np0005544708 kernel: genev_sys_6081: entered promiscuous mode
Dec  3 16:17:31 np0005544708 NetworkManager[48996]: <info>  [1764796651.1574] device (genev_sys_6081): carrier: link connected
Dec  3 16:17:31 np0005544708 NetworkManager[48996]: <info>  [1764796651.1579] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec  3 16:17:31 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v328: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:31 np0005544708 python3.9[142602]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:17:31 np0005544708 ovs-vsctl[142603]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec  3 16:17:32 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:17:32 np0005544708 python3.9[142755]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:17:32 np0005544708 ovs-vsctl[142757]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec  3 16:17:33 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v329: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:33 np0005544708 python3.9[142910]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:17:33 np0005544708 ovs-vsctl[142911]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec  3 16:17:34 np0005544708 systemd[1]: session-45.scope: Deactivated successfully.
Dec  3 16:17:34 np0005544708 systemd[1]: session-45.scope: Consumed 1min 4.123s CPU time.
Dec  3 16:17:34 np0005544708 systemd-logind[787]: Session 45 logged out. Waiting for processes to exit.
Dec  3 16:17:34 np0005544708 systemd-logind[787]: Removed session 45.
Dec  3 16:17:35 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v330: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:37 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v331: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:17:39 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v332: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:39 np0005544708 systemd-logind[787]: New session 47 of user zuul.
Dec  3 16:17:39 np0005544708 systemd[1]: Started Session 47 of User zuul.
Dec  3 16:17:40 np0005544708 python3.9[143089]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:17:41 np0005544708 systemd[1]: Stopping User Manager for UID 0...
Dec  3 16:17:41 np0005544708 systemd[142376]: Activating special unit Exit the Session...
Dec  3 16:17:41 np0005544708 systemd[142376]: Stopped target Main User Target.
Dec  3 16:17:41 np0005544708 systemd[142376]: Stopped target Basic System.
Dec  3 16:17:41 np0005544708 systemd[142376]: Stopped target Paths.
Dec  3 16:17:41 np0005544708 systemd[142376]: Stopped target Sockets.
Dec  3 16:17:41 np0005544708 systemd[142376]: Stopped target Timers.
Dec  3 16:17:41 np0005544708 systemd[142376]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  3 16:17:41 np0005544708 systemd[142376]: Closed D-Bus User Message Bus Socket.
Dec  3 16:17:41 np0005544708 systemd[142376]: Stopped Create User's Volatile Files and Directories.
Dec  3 16:17:41 np0005544708 systemd[142376]: Removed slice User Application Slice.
Dec  3 16:17:41 np0005544708 systemd[142376]: Reached target Shutdown.
Dec  3 16:17:41 np0005544708 systemd[142376]: Finished Exit the Session.
Dec  3 16:17:41 np0005544708 systemd[142376]: Reached target Exit the Session.
Dec  3 16:17:41 np0005544708 systemd[1]: user@0.service: Deactivated successfully.
Dec  3 16:17:41 np0005544708 systemd[1]: Stopped User Manager for UID 0.
Dec  3 16:17:41 np0005544708 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec  3 16:17:41 np0005544708 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec  3 16:17:41 np0005544708 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec  3 16:17:41 np0005544708 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec  3 16:17:41 np0005544708 systemd[1]: Removed slice User Slice of UID 0.
Dec  3 16:17:41 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v333: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:41 np0005544708 python3.9[143246]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:17:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:17:42 np0005544708 python3.9[143398]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:17:43 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v334: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:43 np0005544708 python3.9[143550]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:17:44 np0005544708 python3.9[143702]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:17:45 np0005544708 python3.9[143854]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:17:45 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v335: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:46 np0005544708 python3.9[144004]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:17:46 np0005544708 python3.9[144156]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  3 16:17:47 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v336: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:17:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:17:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:17:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:17:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:17:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:17:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:17:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:17:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:17:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:17:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:17:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:17:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:17:48 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:17:48 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:17:48 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:17:48 np0005544708 python3.9[144413]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:17:48 np0005544708 podman[144489]: 2025-12-03 21:17:48.801522837 +0000 UTC m=+0.067278070 container create 8f8a02e3e582f537a2564613bd5c1687d28e3735f2c59f79356fa771275f2325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_albattani, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec  3 16:17:48 np0005544708 systemd[1]: Started libpod-conmon-8f8a02e3e582f537a2564613bd5c1687d28e3735f2c59f79356fa771275f2325.scope.
Dec  3 16:17:48 np0005544708 podman[144489]: 2025-12-03 21:17:48.777346864 +0000 UTC m=+0.043102177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:17:48 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:17:48 np0005544708 podman[144489]: 2025-12-03 21:17:48.895996158 +0000 UTC m=+0.161751421 container init 8f8a02e3e582f537a2564613bd5c1687d28e3735f2c59f79356fa771275f2325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_albattani, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  3 16:17:48 np0005544708 podman[144489]: 2025-12-03 21:17:48.902943512 +0000 UTC m=+0.168698775 container start 8f8a02e3e582f537a2564613bd5c1687d28e3735f2c59f79356fa771275f2325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_albattani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 16:17:48 np0005544708 podman[144489]: 2025-12-03 21:17:48.907948096 +0000 UTC m=+0.173703369 container attach 8f8a02e3e582f537a2564613bd5c1687d28e3735f2c59f79356fa771275f2325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 16:17:48 np0005544708 pensive_albattani[144513]: 167 167
Dec  3 16:17:48 np0005544708 systemd[1]: libpod-8f8a02e3e582f537a2564613bd5c1687d28e3735f2c59f79356fa771275f2325.scope: Deactivated successfully.
Dec  3 16:17:48 np0005544708 podman[144489]: 2025-12-03 21:17:48.909818596 +0000 UTC m=+0.175573909 container died 8f8a02e3e582f537a2564613bd5c1687d28e3735f2c59f79356fa771275f2325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_albattani, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:17:48 np0005544708 systemd[1]: var-lib-containers-storage-overlay-fef0a4f41563aa276d01f1e778752122b9c081eef67080297386070307aea73f-merged.mount: Deactivated successfully.
Dec  3 16:17:48 np0005544708 podman[144489]: 2025-12-03 21:17:48.967762995 +0000 UTC m=+0.233518228 container remove 8f8a02e3e582f537a2564613bd5c1687d28e3735f2c59f79356fa771275f2325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Dec  3 16:17:48 np0005544708 systemd[1]: libpod-conmon-8f8a02e3e582f537a2564613bd5c1687d28e3735f2c59f79356fa771275f2325.scope: Deactivated successfully.
Dec  3 16:17:49 np0005544708 podman[144594]: 2025-12-03 21:17:49.157059978 +0000 UTC m=+0.047951786 container create b051a431daf0e05e17425c2f7c7291b76f27a27baab9cff7daf82dad39c15e9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:17:49 np0005544708 systemd[1]: Started libpod-conmon-b051a431daf0e05e17425c2f7c7291b76f27a27baab9cff7daf82dad39c15e9a.scope.
Dec  3 16:17:49 np0005544708 podman[144594]: 2025-12-03 21:17:49.134983941 +0000 UTC m=+0.025875789 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:17:49 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:17:49 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714e1a4b190c59c0ff1a61e5cfce80e88c2588ab707c717df3e533ed7c7e530b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:17:49 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714e1a4b190c59c0ff1a61e5cfce80e88c2588ab707c717df3e533ed7c7e530b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:17:49 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714e1a4b190c59c0ff1a61e5cfce80e88c2588ab707c717df3e533ed7c7e530b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:17:49 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714e1a4b190c59c0ff1a61e5cfce80e88c2588ab707c717df3e533ed7c7e530b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:17:49 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714e1a4b190c59c0ff1a61e5cfce80e88c2588ab707c717df3e533ed7c7e530b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:17:49 np0005544708 podman[144594]: 2025-12-03 21:17:49.245105538 +0000 UTC m=+0.135997356 container init b051a431daf0e05e17425c2f7c7291b76f27a27baab9cff7daf82dad39c15e9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_cohen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:17:49 np0005544708 podman[144594]: 2025-12-03 21:17:49.254476757 +0000 UTC m=+0.145368545 container start b051a431daf0e05e17425c2f7c7291b76f27a27baab9cff7daf82dad39c15e9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  3 16:17:49 np0005544708 podman[144594]: 2025-12-03 21:17:49.257836347 +0000 UTC m=+0.148728165 container attach b051a431daf0e05e17425c2f7c7291b76f27a27baab9cff7daf82dad39c15e9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  3 16:17:49 np0005544708 python3.9[144623]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764796667.8076274-86-262003181870701/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:17:49 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v337: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:49 np0005544708 gallant_cohen[144631]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:17:49 np0005544708 gallant_cohen[144631]: --> All data devices are unavailable
Dec  3 16:17:49 np0005544708 systemd[1]: libpod-b051a431daf0e05e17425c2f7c7291b76f27a27baab9cff7daf82dad39c15e9a.scope: Deactivated successfully.
Dec  3 16:17:49 np0005544708 podman[144594]: 2025-12-03 21:17:49.828717992 +0000 UTC m=+0.719609810 container died b051a431daf0e05e17425c2f7c7291b76f27a27baab9cff7daf82dad39c15e9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:17:49 np0005544708 systemd[1]: var-lib-containers-storage-overlay-714e1a4b190c59c0ff1a61e5cfce80e88c2588ab707c717df3e533ed7c7e530b-merged.mount: Deactivated successfully.
Dec  3 16:17:49 np0005544708 podman[144594]: 2025-12-03 21:17:49.881164175 +0000 UTC m=+0.772055993 container remove b051a431daf0e05e17425c2f7c7291b76f27a27baab9cff7daf82dad39c15e9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:17:49 np0005544708 systemd[1]: libpod-conmon-b051a431daf0e05e17425c2f7c7291b76f27a27baab9cff7daf82dad39c15e9a.scope: Deactivated successfully.
Dec  3 16:17:49 np0005544708 python3.9[144800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:17:50 np0005544708 podman[144944]: 2025-12-03 21:17:50.358755901 +0000 UTC m=+0.034145438 container create 3fcf985f72f222a3906ab521b0a15dff305e9c131fd1b8bfa25dc16796cf4bef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:17:50 np0005544708 systemd[1]: Started libpod-conmon-3fcf985f72f222a3906ab521b0a15dff305e9c131fd1b8bfa25dc16796cf4bef.scope.
Dec  3 16:17:50 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:17:50 np0005544708 podman[144944]: 2025-12-03 21:17:50.343920806 +0000 UTC m=+0.019310363 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:17:50 np0005544708 podman[144944]: 2025-12-03 21:17:50.449215576 +0000 UTC m=+0.124605183 container init 3fcf985f72f222a3906ab521b0a15dff305e9c131fd1b8bfa25dc16796cf4bef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 16:17:50 np0005544708 podman[144944]: 2025-12-03 21:17:50.46143575 +0000 UTC m=+0.136825287 container start 3fcf985f72f222a3906ab521b0a15dff305e9c131fd1b8bfa25dc16796cf4bef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 16:17:50 np0005544708 focused_mirzakhani[145000]: 167 167
Dec  3 16:17:50 np0005544708 systemd[1]: libpod-3fcf985f72f222a3906ab521b0a15dff305e9c131fd1b8bfa25dc16796cf4bef.scope: Deactivated successfully.
Dec  3 16:17:50 np0005544708 podman[144944]: 2025-12-03 21:17:50.466847064 +0000 UTC m=+0.142236621 container attach 3fcf985f72f222a3906ab521b0a15dff305e9c131fd1b8bfa25dc16796cf4bef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mirzakhani, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 16:17:50 np0005544708 podman[144944]: 2025-12-03 21:17:50.468278253 +0000 UTC m=+0.143667810 container died 3fcf985f72f222a3906ab521b0a15dff305e9c131fd1b8bfa25dc16796cf4bef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mirzakhani, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 16:17:50 np0005544708 systemd[1]: var-lib-containers-storage-overlay-4de3a05f17e46a6019de05efe5e5a266cd35852ea5373c116eedc845c7d58535-merged.mount: Deactivated successfully.
Dec  3 16:17:50 np0005544708 podman[144944]: 2025-12-03 21:17:50.504607608 +0000 UTC m=+0.179997135 container remove 3fcf985f72f222a3906ab521b0a15dff305e9c131fd1b8bfa25dc16796cf4bef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:17:50 np0005544708 systemd[1]: libpod-conmon-3fcf985f72f222a3906ab521b0a15dff305e9c131fd1b8bfa25dc16796cf4bef.scope: Deactivated successfully.
Dec  3 16:17:50 np0005544708 python3.9[145013]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764796669.5445263-101-62394651386676/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:17:50 np0005544708 podman[145036]: 2025-12-03 21:17:50.71835825 +0000 UTC m=+0.068353878 container create 4e4ac3c03ab60fe859ff8aef3e8419d5674db81474fc7e7ec6a8be62e7f70676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:17:50 np0005544708 systemd[1]: Started libpod-conmon-4e4ac3c03ab60fe859ff8aef3e8419d5674db81474fc7e7ec6a8be62e7f70676.scope.
Dec  3 16:17:50 np0005544708 podman[145036]: 2025-12-03 21:17:50.693390146 +0000 UTC m=+0.043385814 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:17:50 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:17:50 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da9b94c860a9d8d8d135f9f5b7ef7e7cb1351f6c09ab93213c7f9313477770d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:17:50 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da9b94c860a9d8d8d135f9f5b7ef7e7cb1351f6c09ab93213c7f9313477770d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:17:50 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da9b94c860a9d8d8d135f9f5b7ef7e7cb1351f6c09ab93213c7f9313477770d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:17:50 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da9b94c860a9d8d8d135f9f5b7ef7e7cb1351f6c09ab93213c7f9313477770d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:17:50 np0005544708 podman[145036]: 2025-12-03 21:17:50.810187511 +0000 UTC m=+0.160183169 container init 4e4ac3c03ab60fe859ff8aef3e8419d5674db81474fc7e7ec6a8be62e7f70676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bell, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  3 16:17:50 np0005544708 podman[145036]: 2025-12-03 21:17:50.822676893 +0000 UTC m=+0.172672551 container start 4e4ac3c03ab60fe859ff8aef3e8419d5674db81474fc7e7ec6a8be62e7f70676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bell, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 16:17:50 np0005544708 podman[145036]: 2025-12-03 21:17:50.828178569 +0000 UTC m=+0.178174197 container attach 4e4ac3c03ab60fe859ff8aef3e8419d5674db81474fc7e7ec6a8be62e7f70676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]: {
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:    "0": [
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:        {
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "devices": [
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "/dev/loop3"
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            ],
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "lv_name": "ceph_lv0",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "lv_size": "21470642176",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "name": "ceph_lv0",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "tags": {
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.cluster_name": "ceph",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.crush_device_class": "",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.encrypted": "0",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.objectstore": "bluestore",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.osd_id": "0",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.type": "block",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.vdo": "0",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.with_tpm": "0"
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            },
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "type": "block",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "vg_name": "ceph_vg0"
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:        }
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:    ],
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:    "1": [
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:        {
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "devices": [
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "/dev/loop4"
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            ],
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "lv_name": "ceph_lv1",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "lv_size": "21470642176",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "name": "ceph_lv1",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "tags": {
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.cluster_name": "ceph",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.crush_device_class": "",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.encrypted": "0",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.objectstore": "bluestore",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.osd_id": "1",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.type": "block",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.vdo": "0",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.with_tpm": "0"
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            },
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "type": "block",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "vg_name": "ceph_vg1"
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:        }
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:    ],
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:    "2": [
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:        {
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "devices": [
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "/dev/loop5"
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            ],
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "lv_name": "ceph_lv2",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "lv_size": "21470642176",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "name": "ceph_lv2",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "tags": {
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.cluster_name": "ceph",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.crush_device_class": "",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.encrypted": "0",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.objectstore": "bluestore",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.osd_id": "2",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.type": "block",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.vdo": "0",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:                "ceph.with_tpm": "0"
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            },
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "type": "block",
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:            "vg_name": "ceph_vg2"
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:        }
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]:    ]
Dec  3 16:17:51 np0005544708 dazzling_bell[145076]: }
Dec  3 16:17:51 np0005544708 systemd[1]: libpod-4e4ac3c03ab60fe859ff8aef3e8419d5674db81474fc7e7ec6a8be62e7f70676.scope: Deactivated successfully.
Dec  3 16:17:51 np0005544708 podman[145036]: 2025-12-03 21:17:51.180129275 +0000 UTC m=+0.530124923 container died 4e4ac3c03ab60fe859ff8aef3e8419d5674db81474fc7e7ec6a8be62e7f70676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec  3 16:17:51 np0005544708 systemd[1]: var-lib-containers-storage-overlay-4da9b94c860a9d8d8d135f9f5b7ef7e7cb1351f6c09ab93213c7f9313477770d-merged.mount: Deactivated successfully.
Dec  3 16:17:51 np0005544708 podman[145036]: 2025-12-03 21:17:51.261853637 +0000 UTC m=+0.611849295 container remove 4e4ac3c03ab60fe859ff8aef3e8419d5674db81474fc7e7ec6a8be62e7f70676 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_bell, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:17:51 np0005544708 systemd[1]: libpod-conmon-4e4ac3c03ab60fe859ff8aef3e8419d5674db81474fc7e7ec6a8be62e7f70676.scope: Deactivated successfully.
Dec  3 16:17:51 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v338: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:51 np0005544708 python3.9[145223]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 16:17:51 np0005544708 podman[145295]: 2025-12-03 21:17:51.765864015 +0000 UTC m=+0.070971497 container create fd99bd03a3f3d1b162289898e3a64ee5bc076fc7cd9b8a6ca5555261401fdf7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_joliot, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:17:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:17:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:17:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:17:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:17:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:17:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:17:51 np0005544708 systemd[1]: Started libpod-conmon-fd99bd03a3f3d1b162289898e3a64ee5bc076fc7cd9b8a6ca5555261401fdf7f.scope.
Dec  3 16:17:51 np0005544708 podman[145295]: 2025-12-03 21:17:51.735458737 +0000 UTC m=+0.040566259 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:17:51 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:17:51 np0005544708 podman[145295]: 2025-12-03 21:17:51.859597827 +0000 UTC m=+0.164705289 container init fd99bd03a3f3d1b162289898e3a64ee5bc076fc7cd9b8a6ca5555261401fdf7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_joliot, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  3 16:17:51 np0005544708 podman[145295]: 2025-12-03 21:17:51.867315882 +0000 UTC m=+0.172423324 container start fd99bd03a3f3d1b162289898e3a64ee5bc076fc7cd9b8a6ca5555261401fdf7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_joliot, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:17:51 np0005544708 podman[145295]: 2025-12-03 21:17:51.871191755 +0000 UTC m=+0.176299197 container attach fd99bd03a3f3d1b162289898e3a64ee5bc076fc7cd9b8a6ca5555261401fdf7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_joliot, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 16:17:51 np0005544708 sweet_joliot[145312]: 167 167
Dec  3 16:17:51 np0005544708 systemd[1]: libpod-fd99bd03a3f3d1b162289898e3a64ee5bc076fc7cd9b8a6ca5555261401fdf7f.scope: Deactivated successfully.
Dec  3 16:17:51 np0005544708 podman[145295]: 2025-12-03 21:17:51.874595736 +0000 UTC m=+0.179703188 container died fd99bd03a3f3d1b162289898e3a64ee5bc076fc7cd9b8a6ca5555261401fdf7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec  3 16:17:51 np0005544708 systemd[1]: var-lib-containers-storage-overlay-6e626d4e0be2861fe82b40b15d95e36712c9d6b1840e793d499f91f805de1b79-merged.mount: Deactivated successfully.
Dec  3 16:17:51 np0005544708 podman[145295]: 2025-12-03 21:17:51.918074641 +0000 UTC m=+0.223182093 container remove fd99bd03a3f3d1b162289898e3a64ee5bc076fc7cd9b8a6ca5555261401fdf7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_joliot, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Dec  3 16:17:51 np0005544708 systemd[1]: libpod-conmon-fd99bd03a3f3d1b162289898e3a64ee5bc076fc7cd9b8a6ca5555261401fdf7f.scope: Deactivated successfully.
Dec  3 16:17:52 np0005544708 podman[145336]: 2025-12-03 21:17:52.15881053 +0000 UTC m=+0.073291509 container create 7c8b4c2fc2dc2bee15f856c2aac4705f80193bb6764b04673810141865d95366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_shamir, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:17:52 np0005544708 podman[145336]: 2025-12-03 21:17:52.12831886 +0000 UTC m=+0.042799899 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:17:52 np0005544708 systemd[1]: Started libpod-conmon-7c8b4c2fc2dc2bee15f856c2aac4705f80193bb6764b04673810141865d95366.scope.
Dec  3 16:17:52 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:17:52 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0917d8fa3f13220b2276e3d08590714fb8a2802e39d2769e7fbe30e5dfc076/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:17:52 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0917d8fa3f13220b2276e3d08590714fb8a2802e39d2769e7fbe30e5dfc076/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:17:52 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0917d8fa3f13220b2276e3d08590714fb8a2802e39d2769e7fbe30e5dfc076/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:17:52 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0917d8fa3f13220b2276e3d08590714fb8a2802e39d2769e7fbe30e5dfc076/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:17:52 np0005544708 podman[145336]: 2025-12-03 21:17:52.310988906 +0000 UTC m=+0.225469855 container init 7c8b4c2fc2dc2bee15f856c2aac4705f80193bb6764b04673810141865d95366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_shamir, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:17:52 np0005544708 podman[145336]: 2025-12-03 21:17:52.322883992 +0000 UTC m=+0.237364981 container start 7c8b4c2fc2dc2bee15f856c2aac4705f80193bb6764b04673810141865d95366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_shamir, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec  3 16:17:52 np0005544708 podman[145336]: 2025-12-03 21:17:52.326988121 +0000 UTC m=+0.241469070 container attach 7c8b4c2fc2dc2bee15f856c2aac4705f80193bb6764b04673810141865d95366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:17:52 np0005544708 python3.9[145433]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:17:52 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:17:52 np0005544708 lvm[145509]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:17:52 np0005544708 lvm[145509]: VG ceph_vg0 finished
Dec  3 16:17:52 np0005544708 lvm[145510]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:17:52 np0005544708 lvm[145510]: VG ceph_vg1 finished
Dec  3 16:17:52 np0005544708 lvm[145511]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:17:52 np0005544708 lvm[145511]: VG ceph_vg2 finished
Dec  3 16:17:53 np0005544708 exciting_shamir[145400]: {}
Dec  3 16:17:53 np0005544708 systemd[1]: libpod-7c8b4c2fc2dc2bee15f856c2aac4705f80193bb6764b04673810141865d95366.scope: Deactivated successfully.
Dec  3 16:17:53 np0005544708 podman[145336]: 2025-12-03 21:17:53.110609161 +0000 UTC m=+1.025090100 container died 7c8b4c2fc2dc2bee15f856c2aac4705f80193bb6764b04673810141865d95366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_shamir, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec  3 16:17:53 np0005544708 systemd[1]: libpod-7c8b4c2fc2dc2bee15f856c2aac4705f80193bb6764b04673810141865d95366.scope: Consumed 1.365s CPU time.
Dec  3 16:17:53 np0005544708 systemd[1]: var-lib-containers-storage-overlay-fd0917d8fa3f13220b2276e3d08590714fb8a2802e39d2769e7fbe30e5dfc076-merged.mount: Deactivated successfully.
Dec  3 16:17:53 np0005544708 podman[145336]: 2025-12-03 21:17:53.151851198 +0000 UTC m=+1.066332137 container remove 7c8b4c2fc2dc2bee15f856c2aac4705f80193bb6764b04673810141865d95366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_shamir, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:17:53 np0005544708 systemd[1]: libpod-conmon-7c8b4c2fc2dc2bee15f856c2aac4705f80193bb6764b04673810141865d95366.scope: Deactivated successfully.
Dec  3 16:17:53 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:17:53 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:17:53 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:17:53 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:17:53 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v339: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:54 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:17:54 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:17:55 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v340: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:55 np0005544708 python3.9[145702]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 16:17:56 np0005544708 python3.9[145855]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:17:57 np0005544708 python3.9[145976]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764796676.0173767-138-50792853949407/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:17:57 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v341: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:57 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:17:57 np0005544708 python3.9[146126]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:17:58 np0005544708 python3.9[146247]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764796677.3642173-138-32180358981153/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:17:59 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v342: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:17:59 np0005544708 python3.9[146397]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:18:00 np0005544708 python3.9[146518]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764796679.2960143-182-202471294626948/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:18:00 np0005544708 ovn_controller[142340]: 2025-12-03T21:18:00Z|00025|memory|INFO|16256 kB peak resident set size after 29.7 seconds
Dec  3 16:18:00 np0005544708 ovn_controller[142340]: 2025-12-03T21:18:00Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec  3 16:18:00 np0005544708 podman[146519]: 2025-12-03 21:18:00.735058256 +0000 UTC m=+0.127514671 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  3 16:18:01 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v343: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:01 np0005544708 python3.9[146695]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:18:01 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:18:01 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 1811 writes, 7785 keys, 1811 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.01 MB/s#012Cumulative WAL: 1811 writes, 1811 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1811 writes, 7785 keys, 1811 commit groups, 1.0 writes per commit group, ingest: 8.44 MB, 0.01 MB/s#012Interval WAL: 1811 writes, 1811 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     92.4      0.06              0.02         3    0.021       0      0       0.0       0.0#012  L6      1/0    4.29 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    130.8    111.8      0.09              0.04         2    0.044    6034    773       0.0       0.0#012 Sum      1/0    4.29 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     75.3    103.5      0.15              0.06         5    0.030    6034    773       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     78.1    107.0      0.15              0.06         4    0.037    6034    773       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    130.8    111.8      0.09              0.04         2    0.044    6034    773       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    100.0      0.06              0.02         2    0.029       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.006, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.2 seconds#012Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56170d6e38d0#2 capacity: 308.00 MB usage: 510.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(37,440.69 KB,0.139727%) FilterBlock(6,24.23 KB,0.00768389%) IndexBlock(6,45.92 KB,0.0145603%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  3 16:18:01 np0005544708 python3.9[146818]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764796680.789668-182-196313411724688/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:18:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:18:02 np0005544708 python3.9[146968]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:18:03 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v344: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:03 np0005544708 python3.9[147122]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:18:04 np0005544708 python3.9[147274]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:18:05 np0005544708 python3.9[147352]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:18:05 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v345: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:05 np0005544708 python3.9[147504]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:18:06 np0005544708 python3.9[147582]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:18:07 np0005544708 python3.9[147734]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:18:07 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v346: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:07 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:18:07 np0005544708 python3.9[147886]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:18:08 np0005544708 python3.9[147964]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:18:09 np0005544708 python3.9[148116]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:18:09 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v347: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:09 np0005544708 python3.9[148194]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:18:10 np0005544708 python3.9[148346]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:18:10 np0005544708 systemd[1]: Reloading.
Dec  3 16:18:10 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:18:10 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:18:11 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v348: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:11 np0005544708 python3.9[148535]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:18:12 np0005544708 python3.9[148613]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:18:12 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:18:13 np0005544708 python3.9[148765]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:18:13 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v349: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:13 np0005544708 python3.9[148843]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:18:15 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v350: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:16 np0005544708 python3.9[148995]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:18:16 np0005544708 systemd[1]: Reloading.
Dec  3 16:18:16 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:18:16 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:18:16 np0005544708 systemd[1]: Starting Create netns directory...
Dec  3 16:18:16 np0005544708 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  3 16:18:16 np0005544708 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  3 16:18:16 np0005544708 systemd[1]: Finished Create netns directory.
Dec  3 16:18:17 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v351: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:17 np0005544708 python3.9[149189]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:18:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:18:18 np0005544708 python3.9[149341]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:18:18 np0005544708 python3.9[149464]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764796697.72217-333-100123149994289/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:18:19 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v352: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:19 np0005544708 python3.9[149616]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:18:20 np0005544708 python3.9[149768]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:18:21
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', '.mgr', 'backups', 'vms']
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v353: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:21 np0005544708 python3.9[149891]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764796700.206883-358-230066662419595/.source.json _original_basename=.4naw9mvp follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:18:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:18:22 np0005544708 python3.9[150043]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:18:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:18:23 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v354: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:24 np0005544708 python3.9[150470]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec  3 16:18:25 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v355: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:25 np0005544708 python3.9[150622]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 16:18:26 np0005544708 python3.9[150774]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  3 16:18:27 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v356: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:18:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:18:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:18:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:18:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:18:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:18:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:18:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:18:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:18:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:18:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:18:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:18:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:18:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:18:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:18:27 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:18:28 np0005544708 python3[150953]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 16:18:29 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v357: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:31 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v358: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:32 np0005544708 podman[151017]: 2025-12-03 21:18:32.228360818 +0000 UTC m=+1.169000155 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  3 16:18:32 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:18:33 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v359: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:35 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v360: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:37 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v361: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:39 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v362: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:39 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:18:39 np0005544708 podman[150967]: 2025-12-03 21:18:39.970323756 +0000 UTC m=+11.174990797 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 16:18:40 np0005544708 podman[151112]: 2025-12-03 21:18:40.192712948 +0000 UTC m=+0.080856750 container create ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Dec  3 16:18:40 np0005544708 podman[151112]: 2025-12-03 21:18:40.152724535 +0000 UTC m=+0.040868377 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 16:18:40 np0005544708 python3[150953]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  3 16:18:41 np0005544708 python3.9[151304]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:18:41 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v363: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:42 np0005544708 python3.9[151458]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:18:42 np0005544708 python3.9[151534]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:18:43 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v364: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:43 np0005544708 python3.9[151686]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764796722.7850075-446-184115650515974/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:18:44 np0005544708 python3.9[151763]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 16:18:44 np0005544708 systemd[1]: Reloading.
Dec  3 16:18:44 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:18:44 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:18:44 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:18:45 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v365: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:45 np0005544708 python3.9[151874]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:18:46 np0005544708 systemd[1]: Reloading.
Dec  3 16:18:46 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:18:46 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:18:46 np0005544708 systemd[1]: Starting ovn_metadata_agent container...
Dec  3 16:18:46 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:18:46 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1af3e6be989aae80c7fa7a27b5aa50e0661b0544c6b5b89d7c1402b58f0905a/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec  3 16:18:46 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1af3e6be989aae80c7fa7a27b5aa50e0661b0544c6b5b89d7c1402b58f0905a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  3 16:18:46 np0005544708 systemd[1]: Started /usr/bin/podman healthcheck run ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c.
Dec  3 16:18:46 np0005544708 podman[151917]: 2025-12-03 21:18:46.996288428 +0000 UTC m=+0.180510544 container init ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: + sudo -E kolla_set_configs
Dec  3 16:18:47 np0005544708 podman[151917]: 2025-12-03 21:18:47.032562637 +0000 UTC m=+0.216784753 container start ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  3 16:18:47 np0005544708 edpm-start-podman-container[151917]: ovn_metadata_agent
Dec  3 16:18:47 np0005544708 podman[151938]: 2025-12-03 21:18:47.121160932 +0000 UTC m=+0.070492016 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 16:18:47 np0005544708 edpm-start-podman-container[151916]: Creating additional drop-in dependency for "ovn_metadata_agent" (ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c)
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: INFO:__main__:Validating config file
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: INFO:__main__:Copying service configuration files
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: INFO:__main__:Writing out command to execute
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: INFO:__main__:Setting permission for /var/lib/neutron
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: ++ cat /run_command
Dec  3 16:18:47 np0005544708 systemd[1]: Reloading.
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: + CMD=neutron-ovn-metadata-agent
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: + ARGS=
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: + sudo kolla_copy_cacerts
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: + [[ ! -n '' ]]
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: + . kolla_extend_start
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: Running command: 'neutron-ovn-metadata-agent'
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: + umask 0022
Dec  3 16:18:47 np0005544708 ovn_metadata_agent[151932]: + exec neutron-ovn-metadata-agent
Dec  3 16:18:47 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:18:47 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:18:47 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v366: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:47 np0005544708 systemd[1]: Started ovn_metadata_agent container.
Dec  3 16:18:47 np0005544708 systemd[1]: session-47.scope: Deactivated successfully.
Dec  3 16:18:47 np0005544708 systemd[1]: session-47.scope: Consumed 1min 1.553s CPU time.
Dec  3 16:18:47 np0005544708 systemd-logind[787]: Session 47 logged out. Waiting for processes to exit.
Dec  3 16:18:47 np0005544708 systemd-logind[787]: Removed session 47.
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.885 151937 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.885 151937 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.885 151937 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.886 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.886 151937 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.886 151937 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.886 151937 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.886 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.887 151937 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.887 151937 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.887 151937 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.887 151937 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.887 151937 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.887 151937 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.887 151937 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.887 151937 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.887 151937 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.888 151937 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.888 151937 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.888 151937 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.888 151937 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.888 151937 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.888 151937 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.888 151937 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.888 151937 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.888 151937 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.889 151937 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.889 151937 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.889 151937 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.889 151937 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.889 151937 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.889 151937 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.889 151937 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.889 151937 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.889 151937 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.890 151937 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.890 151937 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.890 151937 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.890 151937 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.890 151937 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.890 151937 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.890 151937 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.890 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.890 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.890 151937 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.891 151937 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.891 151937 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.891 151937 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.891 151937 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.891 151937 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.891 151937 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.891 151937 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.891 151937 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.891 151937 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.891 151937 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.892 151937 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.892 151937 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.892 151937 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.892 151937 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.892 151937 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.892 151937 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.892 151937 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.892 151937 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.892 151937 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.893 151937 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.893 151937 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.893 151937 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.893 151937 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.893 151937 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.893 151937 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.893 151937 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.893 151937 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.893 151937 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.893 151937 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.894 151937 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.894 151937 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.894 151937 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.894 151937 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.894 151937 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.894 151937 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.894 151937 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.894 151937 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.894 151937 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.894 151937 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.895 151937 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.895 151937 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.895 151937 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.895 151937 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.895 151937 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.895 151937 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.895 151937 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.895 151937 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.896 151937 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.896 151937 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.896 151937 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.896 151937 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.896 151937 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.896 151937 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.896 151937 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.896 151937 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.896 151937 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.897 151937 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.897 151937 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.897 151937 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.897 151937 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.897 151937 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.897 151937 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.897 151937 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.897 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.898 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.898 151937 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.898 151937 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.898 151937 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.898 151937 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.898 151937 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.898 151937 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.899 151937 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.899 151937 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.899 151937 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.899 151937 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.899 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.899 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.899 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.899 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.900 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.900 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.900 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.900 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.900 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.900 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.900 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.900 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.900 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.901 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.901 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.901 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.901 151937 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.901 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.901 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.901 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.901 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.902 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.902 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.902 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.902 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.902 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.902 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.902 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.902 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.902 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.902 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.903 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.903 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.903 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.903 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.903 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.903 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.903 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.903 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.903 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.903 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.904 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.904 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.904 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.904 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.904 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.904 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.904 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.904 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.904 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.905 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.905 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.905 151937 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.905 151937 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.905 151937 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.905 151937 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.905 151937 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.905 151937 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.905 151937 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.905 151937 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.906 151937 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.906 151937 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.906 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.906 151937 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.906 151937 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.906 151937 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.906 151937 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.906 151937 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.906 151937 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.907 151937 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.907 151937 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.907 151937 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.907 151937 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.907 151937 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.907 151937 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.907 151937 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.907 151937 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.907 151937 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.908 151937 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.908 151937 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.908 151937 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.908 151937 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.908 151937 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.908 151937 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.908 151937 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.908 151937 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.908 151937 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.909 151937 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.909 151937 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.909 151937 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.909 151937 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.909 151937 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.909 151937 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.909 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.909 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.909 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.910 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.910 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.910 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.910 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.910 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.910 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.910 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.910 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.910 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.910 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.911 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.911 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.911 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.911 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.911 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.911 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.911 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.911 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.911 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.911 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.912 151937 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.912 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.912 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.912 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.912 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.912 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.912 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.912 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.912 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.913 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.913 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.913 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.913 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.913 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.913 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.913 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.913 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.914 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.914 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.914 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.914 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.914 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.914 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.915 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.915 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.915 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.915 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.915 151937 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.915 151937 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.916 151937 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.916 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.916 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.916 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.916 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.916 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.917 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.917 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.917 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.917 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.917 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.917 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.918 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.918 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.918 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.918 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.918 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.918 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.918 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.918 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.919 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.919 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.919 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.919 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.919 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.919 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.919 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.920 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.920 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.920 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.920 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.920 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.920 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.920 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.920 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.920 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.921 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.921 151937 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.921 151937 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.930 151937 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.930 151937 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.930 151937 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.930 151937 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.930 151937 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.944 151937 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name f27c01e7-5b62-4209-a664-3ae50b74644d (UUID: f27c01e7-5b62-4209-a664-3ae50b74644d) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.965 151937 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.966 151937 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.966 151937 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.966 151937 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.968 151937 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.974 151937 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.978 151937 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'f27c01e7-5b62-4209-a664-3ae50b74644d'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fcf8e279af0>], external_ids={}, name=f27c01e7-5b62-4209-a664-3ae50b74644d, nb_cfg_timestamp=1764796659125, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.978 151937 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fcf8e27cb20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.979 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.979 151937 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.979 151937 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.979 151937 INFO oslo_service.service [-] Starting 1 workers#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.983 151937 DEBUG oslo_service.service [-] Started child 152048 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.987 151937 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpy_tfhlj0/privsep.sock']#033[00m
Dec  3 16:18:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:48.989 152048 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-429722'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Dec  3 16:18:49 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:49.027 152048 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  3 16:18:49 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:49.028 152048 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  3 16:18:49 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:49.028 152048 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  3 16:18:49 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:49.033 152048 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  3 16:18:49 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:49.042 152048 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  3 16:18:49 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:49.051 152048 INFO eventlet.wsgi.server [-] (152048) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Dec  3 16:18:49 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v367: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:49 np0005544708 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec  3 16:18:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:18:49 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:49.667 151937 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  3 16:18:49 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:49.668 151937 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpy_tfhlj0/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  3 16:18:49 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:49.550 152053 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  3 16:18:49 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:49.553 152053 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  3 16:18:49 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:49.555 152053 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Dec  3 16:18:49 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:49.555 152053 INFO oslo.privsep.daemon [-] privsep daemon running as pid 152053#033[00m
Dec  3 16:18:49 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:49.670 152053 DEBUG oslo.privsep.daemon [-] privsep: reply[88d87757-c1da-4f20-b8da-8be9d4811535]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.136 152053 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.137 152053 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.137 152053 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.637 152053 DEBUG oslo.privsep.daemon [-] privsep: reply[e3bb22a8-63ee-4972-8b04-f7010411a591]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.641 151937 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=f27c01e7-5b62-4209-a664-3ae50b74644d, column=external_ids, values=({'neutron:ovn-metadata-id': '50a49342-76af-5160-b4ff-e6b2680e1d47'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.656 151937 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f27c01e7-5b62-4209-a664-3ae50b74644d, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.665 151937 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.665 151937 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.665 151937 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.665 151937 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.666 151937 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.666 151937 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.666 151937 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.666 151937 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.667 151937 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.667 151937 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.667 151937 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.667 151937 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.668 151937 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.668 151937 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.668 151937 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.668 151937 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.669 151937 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.669 151937 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.669 151937 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.669 151937 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.670 151937 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.670 151937 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.670 151937 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.670 151937 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.671 151937 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.671 151937 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.671 151937 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.672 151937 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.672 151937 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.672 151937 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.672 151937 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.672 151937 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.673 151937 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.673 151937 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.673 151937 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.674 151937 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.674 151937 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.674 151937 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.674 151937 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.675 151937 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.675 151937 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.675 151937 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.675 151937 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.676 151937 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.676 151937 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.676 151937 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.676 151937 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.676 151937 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.677 151937 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.677 151937 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.677 151937 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.677 151937 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.678 151937 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.678 151937 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.678 151937 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.678 151937 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.678 151937 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.679 151937 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.679 151937 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.679 151937 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.679 151937 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.679 151937 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.680 151937 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.680 151937 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.680 151937 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.680 151937 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.681 151937 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.681 151937 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.681 151937 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.681 151937 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.681 151937 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.682 151937 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.682 151937 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.682 151937 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.682 151937 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.683 151937 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.683 151937 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.683 151937 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.683 151937 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.684 151937 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.684 151937 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.684 151937 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.684 151937 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.684 151937 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.685 151937 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.685 151937 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.685 151937 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.685 151937 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.686 151937 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.686 151937 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.686 151937 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.686 151937 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.686 151937 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.687 151937 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.687 151937 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.687 151937 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.687 151937 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.688 151937 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.688 151937 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.688 151937 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.688 151937 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.688 151937 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.689 151937 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.689 151937 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.689 151937 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.689 151937 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.689 151937 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.690 151937 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.690 151937 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.690 151937 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.691 151937 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.691 151937 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.691 151937 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.691 151937 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.691 151937 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.692 151937 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.692 151937 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.692 151937 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.692 151937 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.693 151937 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.693 151937 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.693 151937 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.693 151937 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.694 151937 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.694 151937 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.694 151937 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.694 151937 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.695 151937 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.695 151937 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.695 151937 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.695 151937 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.696 151937 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.696 151937 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.696 151937 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.696 151937 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.697 151937 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.697 151937 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.697 151937 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.697 151937 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.697 151937 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.698 151937 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.698 151937 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.698 151937 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.698 151937 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.699 151937 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.699 151937 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.699 151937 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.699 151937 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.699 151937 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.700 151937 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.700 151937 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.700 151937 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.700 151937 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.701 151937 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.701 151937 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.701 151937 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.701 151937 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.701 151937 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.702 151937 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.702 151937 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.702 151937 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.702 151937 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.702 151937 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.703 151937 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.703 151937 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.703 151937 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.703 151937 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.703 151937 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.704 151937 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.704 151937 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.704 151937 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.704 151937 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.705 151937 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.705 151937 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.705 151937 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.705 151937 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.705 151937 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.706 151937 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.706 151937 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.706 151937 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.706 151937 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.707 151937 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.707 151937 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.707 151937 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.707 151937 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.707 151937 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.708 151937 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.708 151937 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.708 151937 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.708 151937 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.708 151937 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.708 151937 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.708 151937 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.709 151937 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.709 151937 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.709 151937 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.709 151937 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.709 151937 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.709 151937 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.709 151937 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.709 151937 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.710 151937 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.710 151937 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.710 151937 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.710 151937 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.710 151937 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.710 151937 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.710 151937 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.711 151937 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.711 151937 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.711 151937 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.711 151937 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.711 151937 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.711 151937 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.711 151937 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.711 151937 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.712 151937 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.712 151937 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.712 151937 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.712 151937 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.712 151937 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.712 151937 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.712 151937 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.713 151937 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.713 151937 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.713 151937 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.713 151937 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.713 151937 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.713 151937 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.713 151937 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.713 151937 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.714 151937 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.714 151937 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.714 151937 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.714 151937 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.714 151937 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.714 151937 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.714 151937 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.715 151937 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.715 151937 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.715 151937 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.715 151937 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.715 151937 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.715 151937 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.715 151937 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.716 151937 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.716 151937 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.716 151937 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.716 151937 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.716 151937 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.716 151937 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.716 151937 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.717 151937 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.717 151937 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.717 151937 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.717 151937 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.717 151937 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.717 151937 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.717 151937 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.718 151937 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.718 151937 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.718 151937 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.718 151937 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.718 151937 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.718 151937 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.718 151937 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.718 151937 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.719 151937 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.719 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.719 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.719 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.719 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.719 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.719 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.720 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.720 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.720 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.720 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.720 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.720 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.720 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.721 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.721 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.721 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.721 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.721 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.721 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.721 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.722 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.722 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.722 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.722 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.722 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.722 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.722 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.723 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.723 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.723 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.723 151937 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.723 151937 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.723 151937 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.723 151937 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.724 151937 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:18:50 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:18:50.724 151937 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  3 16:18:51 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v368: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:18:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:18:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:18:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:18:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:18:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:18:53 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v369: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:53 np0005544708 systemd-logind[787]: New session 48 of user zuul.
Dec  3 16:18:53 np0005544708 systemd[1]: Started Session 48 of User zuul.
Dec  3 16:18:53 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:18:53 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:18:53 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:18:53 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:18:54 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:18:54 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:18:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:18:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:18:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:18:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:18:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:18:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:18:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:18:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:18:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:18:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:18:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:18:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:18:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:18:54 np0005544708 python3.9[152364]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:18:55 np0005544708 podman[152431]: 2025-12-03 21:18:55.086491382 +0000 UTC m=+0.038940159 container create 81eccddf2beb9b976cd1390798ba0a02b976e63bda0fe2e9919e404d74cc33bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mclaren, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:18:55 np0005544708 systemd[1]: Started libpod-conmon-81eccddf2beb9b976cd1390798ba0a02b976e63bda0fe2e9919e404d74cc33bc.scope.
Dec  3 16:18:55 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:18:55 np0005544708 podman[152431]: 2025-12-03 21:18:55.068200549 +0000 UTC m=+0.020649366 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:18:55 np0005544708 podman[152431]: 2025-12-03 21:18:55.182394666 +0000 UTC m=+0.134843543 container init 81eccddf2beb9b976cd1390798ba0a02b976e63bda0fe2e9919e404d74cc33bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mclaren, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:18:55 np0005544708 podman[152431]: 2025-12-03 21:18:55.190374852 +0000 UTC m=+0.142823659 container start 81eccddf2beb9b976cd1390798ba0a02b976e63bda0fe2e9919e404d74cc33bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mclaren, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:18:55 np0005544708 podman[152431]: 2025-12-03 21:18:55.195410543 +0000 UTC m=+0.147859420 container attach 81eccddf2beb9b976cd1390798ba0a02b976e63bda0fe2e9919e404d74cc33bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 16:18:55 np0005544708 magical_mclaren[152461]: 167 167
Dec  3 16:18:55 np0005544708 systemd[1]: libpod-81eccddf2beb9b976cd1390798ba0a02b976e63bda0fe2e9919e404d74cc33bc.scope: Deactivated successfully.
Dec  3 16:18:55 np0005544708 conmon[152461]: conmon 81eccddf2beb9b976cd1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81eccddf2beb9b976cd1390798ba0a02b976e63bda0fe2e9919e404d74cc33bc.scope/container/memory.events
Dec  3 16:18:55 np0005544708 podman[152431]: 2025-12-03 21:18:55.201129301 +0000 UTC m=+0.153578098 container died 81eccddf2beb9b976cd1390798ba0a02b976e63bda0fe2e9919e404d74cc33bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mclaren, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 16:18:55 np0005544708 systemd[1]: var-lib-containers-storage-overlay-aa0f619a1051df77f218b259db3f5c0db0c3a8c998233f29d06187bc5b0682eb-merged.mount: Deactivated successfully.
Dec  3 16:18:55 np0005544708 podman[152431]: 2025-12-03 21:18:55.273183097 +0000 UTC m=+0.225631904 container remove 81eccddf2beb9b976cd1390798ba0a02b976e63bda0fe2e9919e404d74cc33bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:18:55 np0005544708 systemd[1]: libpod-conmon-81eccddf2beb9b976cd1390798ba0a02b976e63bda0fe2e9919e404d74cc33bc.scope: Deactivated successfully.
Dec  3 16:18:55 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v370: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:55 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:18:55 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:18:55 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:18:55 np0005544708 podman[152527]: 2025-12-03 21:18:55.495364328 +0000 UTC m=+0.057967261 container create 8144338f30fe8fc144bbe15c2f24c0fc33f13e7631104cdcc9b73d81089e450c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_cartwright, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Dec  3 16:18:55 np0005544708 systemd[1]: Started libpod-conmon-8144338f30fe8fc144bbe15c2f24c0fc33f13e7631104cdcc9b73d81089e450c.scope.
Dec  3 16:18:55 np0005544708 podman[152527]: 2025-12-03 21:18:55.467485956 +0000 UTC m=+0.030088949 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:18:55 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:18:55 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a877390519504be18c491692cfcf9f91a9674efbcc500ccc96a9e96b9c119395/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:18:55 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a877390519504be18c491692cfcf9f91a9674efbcc500ccc96a9e96b9c119395/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:18:55 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a877390519504be18c491692cfcf9f91a9674efbcc500ccc96a9e96b9c119395/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:18:55 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a877390519504be18c491692cfcf9f91a9674efbcc500ccc96a9e96b9c119395/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:18:55 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a877390519504be18c491692cfcf9f91a9674efbcc500ccc96a9e96b9c119395/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:18:55 np0005544708 podman[152527]: 2025-12-03 21:18:55.589726941 +0000 UTC m=+0.152329894 container init 8144338f30fe8fc144bbe15c2f24c0fc33f13e7631104cdcc9b73d81089e450c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_cartwright, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 16:18:55 np0005544708 podman[152527]: 2025-12-03 21:18:55.599826373 +0000 UTC m=+0.162429286 container start 8144338f30fe8fc144bbe15c2f24c0fc33f13e7631104cdcc9b73d81089e450c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  3 16:18:55 np0005544708 podman[152527]: 2025-12-03 21:18:55.603405186 +0000 UTC m=+0.166008099 container attach 8144338f30fe8fc144bbe15c2f24c0fc33f13e7631104cdcc9b73d81089e450c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_cartwright, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  3 16:18:56 np0005544708 objective_cartwright[152564]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:18:56 np0005544708 objective_cartwright[152564]: --> All data devices are unavailable
Dec  3 16:18:56 np0005544708 python3.9[152648]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:18:56 np0005544708 systemd[1]: libpod-8144338f30fe8fc144bbe15c2f24c0fc33f13e7631104cdcc9b73d81089e450c.scope: Deactivated successfully.
Dec  3 16:18:56 np0005544708 podman[152527]: 2025-12-03 21:18:56.137896154 +0000 UTC m=+0.700499067 container died 8144338f30fe8fc144bbe15c2f24c0fc33f13e7631104cdcc9b73d81089e450c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:18:56 np0005544708 systemd[1]: var-lib-containers-storage-overlay-a877390519504be18c491692cfcf9f91a9674efbcc500ccc96a9e96b9c119395-merged.mount: Deactivated successfully.
Dec  3 16:18:56 np0005544708 podman[152527]: 2025-12-03 21:18:56.173636448 +0000 UTC m=+0.736239361 container remove 8144338f30fe8fc144bbe15c2f24c0fc33f13e7631104cdcc9b73d81089e450c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_cartwright, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:18:56 np0005544708 systemd[1]: libpod-conmon-8144338f30fe8fc144bbe15c2f24c0fc33f13e7631104cdcc9b73d81089e450c.scope: Deactivated successfully.
Dec  3 16:18:56 np0005544708 podman[152792]: 2025-12-03 21:18:56.720039636 +0000 UTC m=+0.048917108 container create edaaeea4f3309de9236a7ef77a7beae511e180bf0c4c4743f150f5b98edd1f46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_yonath, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:18:56 np0005544708 systemd[1]: Started libpod-conmon-edaaeea4f3309de9236a7ef77a7beae511e180bf0c4c4743f150f5b98edd1f46.scope.
Dec  3 16:18:56 np0005544708 podman[152792]: 2025-12-03 21:18:56.699199526 +0000 UTC m=+0.028077038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:18:56 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:18:56 np0005544708 podman[152792]: 2025-12-03 21:18:56.816201465 +0000 UTC m=+0.145078937 container init edaaeea4f3309de9236a7ef77a7beae511e180bf0c4c4743f150f5b98edd1f46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_yonath, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  3 16:18:56 np0005544708 podman[152792]: 2025-12-03 21:18:56.824010567 +0000 UTC m=+0.152888039 container start edaaeea4f3309de9236a7ef77a7beae511e180bf0c4c4743f150f5b98edd1f46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_yonath, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec  3 16:18:56 np0005544708 podman[152792]: 2025-12-03 21:18:56.827422305 +0000 UTC m=+0.156299777 container attach edaaeea4f3309de9236a7ef77a7beae511e180bf0c4c4743f150f5b98edd1f46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec  3 16:18:56 np0005544708 relaxed_yonath[152828]: 167 167
Dec  3 16:18:56 np0005544708 podman[152792]: 2025-12-03 21:18:56.830079524 +0000 UTC m=+0.158956996 container died edaaeea4f3309de9236a7ef77a7beae511e180bf0c4c4743f150f5b98edd1f46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_yonath, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:18:56 np0005544708 systemd[1]: libpod-edaaeea4f3309de9236a7ef77a7beae511e180bf0c4c4743f150f5b98edd1f46.scope: Deactivated successfully.
Dec  3 16:18:56 np0005544708 systemd[1]: var-lib-containers-storage-overlay-db412f71611247c22ef5cb00af7a723e21d90c73b551a069249abc0cfeafc806-merged.mount: Deactivated successfully.
Dec  3 16:18:56 np0005544708 podman[152792]: 2025-12-03 21:18:56.871643551 +0000 UTC m=+0.200521023 container remove edaaeea4f3309de9236a7ef77a7beae511e180bf0c4c4743f150f5b98edd1f46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_yonath, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  3 16:18:56 np0005544708 systemd[1]: libpod-conmon-edaaeea4f3309de9236a7ef77a7beae511e180bf0c4c4743f150f5b98edd1f46.scope: Deactivated successfully.
Dec  3 16:18:57 np0005544708 podman[152863]: 2025-12-03 21:18:57.087849348 +0000 UTC m=+0.059451740 container create b0b2866adbd1213bfeac11d3817b4f29296e2a90ab8dfbf64aa3e135e909f5ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_almeida, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:18:57 np0005544708 systemd[1]: Started libpod-conmon-b0b2866adbd1213bfeac11d3817b4f29296e2a90ab8dfbf64aa3e135e909f5ce.scope.
Dec  3 16:18:57 np0005544708 podman[152863]: 2025-12-03 21:18:57.06669901 +0000 UTC m=+0.038301412 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:18:57 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:18:57 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df5f8525e5015a1dd0dd5bff8012fd7b1ff10277f99fa277610ad80f947c5c1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:18:57 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df5f8525e5015a1dd0dd5bff8012fd7b1ff10277f99fa277610ad80f947c5c1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:18:57 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df5f8525e5015a1dd0dd5bff8012fd7b1ff10277f99fa277610ad80f947c5c1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:18:57 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df5f8525e5015a1dd0dd5bff8012fd7b1ff10277f99fa277610ad80f947c5c1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:18:57 np0005544708 podman[152863]: 2025-12-03 21:18:57.188252927 +0000 UTC m=+0.159855379 container init b0b2866adbd1213bfeac11d3817b4f29296e2a90ab8dfbf64aa3e135e909f5ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_almeida, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:18:57 np0005544708 podman[152863]: 2025-12-03 21:18:57.201537621 +0000 UTC m=+0.173139983 container start b0b2866adbd1213bfeac11d3817b4f29296e2a90ab8dfbf64aa3e135e909f5ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 16:18:57 np0005544708 podman[152863]: 2025-12-03 21:18:57.205185806 +0000 UTC m=+0.176788248 container attach b0b2866adbd1213bfeac11d3817b4f29296e2a90ab8dfbf64aa3e135e909f5ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_almeida, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 16:18:57 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v371: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]: {
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:    "0": [
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:        {
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "devices": [
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "/dev/loop3"
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            ],
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "lv_name": "ceph_lv0",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "lv_size": "21470642176",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "name": "ceph_lv0",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "tags": {
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.cluster_name": "ceph",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.crush_device_class": "",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.encrypted": "0",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.objectstore": "bluestore",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.osd_id": "0",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.type": "block",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.vdo": "0",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.with_tpm": "0"
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            },
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "type": "block",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "vg_name": "ceph_vg0"
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:        }
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:    ],
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:    "1": [
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:        {
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "devices": [
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "/dev/loop4"
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            ],
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "lv_name": "ceph_lv1",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "lv_size": "21470642176",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "name": "ceph_lv1",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "tags": {
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.cluster_name": "ceph",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.crush_device_class": "",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.encrypted": "0",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.objectstore": "bluestore",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.osd_id": "1",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.type": "block",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.vdo": "0",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.with_tpm": "0"
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            },
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "type": "block",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "vg_name": "ceph_vg1"
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:        }
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:    ],
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:    "2": [
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:        {
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "devices": [
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "/dev/loop5"
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            ],
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "lv_name": "ceph_lv2",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "lv_size": "21470642176",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "name": "ceph_lv2",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "tags": {
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.cluster_name": "ceph",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.crush_device_class": "",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.encrypted": "0",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.objectstore": "bluestore",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.osd_id": "2",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.type": "block",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.vdo": "0",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:                "ceph.with_tpm": "0"
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            },
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "type": "block",
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:            "vg_name": "ceph_vg2"
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:        }
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]:    ]
Dec  3 16:18:57 np0005544708 interesting_almeida[152879]: }
Dec  3 16:18:57 np0005544708 systemd[1]: libpod-b0b2866adbd1213bfeac11d3817b4f29296e2a90ab8dfbf64aa3e135e909f5ce.scope: Deactivated successfully.
Dec  3 16:18:57 np0005544708 podman[152863]: 2025-12-03 21:18:57.620078387 +0000 UTC m=+0.591680749 container died b0b2866adbd1213bfeac11d3817b4f29296e2a90ab8dfbf64aa3e135e909f5ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_almeida, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:18:57 np0005544708 systemd[1]: var-lib-containers-storage-overlay-df5f8525e5015a1dd0dd5bff8012fd7b1ff10277f99fa277610ad80f947c5c1e-merged.mount: Deactivated successfully.
Dec  3 16:18:57 np0005544708 podman[152863]: 2025-12-03 21:18:57.682162374 +0000 UTC m=+0.653764736 container remove b0b2866adbd1213bfeac11d3817b4f29296e2a90ab8dfbf64aa3e135e909f5ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_almeida, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:18:57 np0005544708 systemd[1]: libpod-conmon-b0b2866adbd1213bfeac11d3817b4f29296e2a90ab8dfbf64aa3e135e909f5ce.scope: Deactivated successfully.
Dec  3 16:18:57 np0005544708 python3.9[152961]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 16:18:57 np0005544708 systemd[1]: Reloading.
Dec  3 16:18:57 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:18:57 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:18:58 np0005544708 podman[153148]: 2025-12-03 21:18:58.427204113 +0000 UTC m=+0.067557549 container create 50d50d817d9bf90f2c101cc83a27e253e77f8e6e27478612bde8546953cb9a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_dijkstra, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:18:58 np0005544708 systemd[1]: Started libpod-conmon-50d50d817d9bf90f2c101cc83a27e253e77f8e6e27478612bde8546953cb9a29.scope.
Dec  3 16:18:58 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:18:58 np0005544708 podman[153148]: 2025-12-03 21:18:58.402416582 +0000 UTC m=+0.042770068 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:18:58 np0005544708 podman[153148]: 2025-12-03 21:18:58.505907071 +0000 UTC m=+0.146260557 container init 50d50d817d9bf90f2c101cc83a27e253e77f8e6e27478612bde8546953cb9a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:18:58 np0005544708 podman[153148]: 2025-12-03 21:18:58.513940049 +0000 UTC m=+0.154293525 container start 50d50d817d9bf90f2c101cc83a27e253e77f8e6e27478612bde8546953cb9a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:18:58 np0005544708 podman[153148]: 2025-12-03 21:18:58.518084046 +0000 UTC m=+0.158437522 container attach 50d50d817d9bf90f2c101cc83a27e253e77f8e6e27478612bde8546953cb9a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  3 16:18:58 np0005544708 busy_dijkstra[153164]: 167 167
Dec  3 16:18:58 np0005544708 systemd[1]: libpod-50d50d817d9bf90f2c101cc83a27e253e77f8e6e27478612bde8546953cb9a29.scope: Deactivated successfully.
Dec  3 16:18:58 np0005544708 podman[153148]: 2025-12-03 21:18:58.522301176 +0000 UTC m=+0.162654642 container died 50d50d817d9bf90f2c101cc83a27e253e77f8e6e27478612bde8546953cb9a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_dijkstra, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 16:18:58 np0005544708 systemd[1]: var-lib-containers-storage-overlay-59949f01963d83f19911d0e8b7c0899340aedaa2a77b0e44414801535a1f0899-merged.mount: Deactivated successfully.
Dec  3 16:18:58 np0005544708 podman[153148]: 2025-12-03 21:18:58.573131111 +0000 UTC m=+0.213484587 container remove 50d50d817d9bf90f2c101cc83a27e253e77f8e6e27478612bde8546953cb9a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_dijkstra, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:18:58 np0005544708 systemd[1]: libpod-conmon-50d50d817d9bf90f2c101cc83a27e253e77f8e6e27478612bde8546953cb9a29.scope: Deactivated successfully.
Dec  3 16:18:58 np0005544708 podman[153261]: 2025-12-03 21:18:58.820107545 +0000 UTC m=+0.074243032 container create 6d746c0d9901f36515318597abd149681402966f5c7a93b5d51f63757e229666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:18:58 np0005544708 systemd[1]: Started libpod-conmon-6d746c0d9901f36515318597abd149681402966f5c7a93b5d51f63757e229666.scope.
Dec  3 16:18:58 np0005544708 podman[153261]: 2025-12-03 21:18:58.790651153 +0000 UTC m=+0.044786690 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:18:58 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:18:58 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1882832d911e92556a8534d548c3baa2b31f9dd3150afe8e8f01eeeec73072/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:18:58 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1882832d911e92556a8534d548c3baa2b31f9dd3150afe8e8f01eeeec73072/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:18:58 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1882832d911e92556a8534d548c3baa2b31f9dd3150afe8e8f01eeeec73072/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:18:58 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1882832d911e92556a8534d548c3baa2b31f9dd3150afe8e8f01eeeec73072/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:18:58 np0005544708 python3.9[153255]: ansible-ansible.builtin.service_facts Invoked
Dec  3 16:18:58 np0005544708 podman[153261]: 2025-12-03 21:18:58.919328265 +0000 UTC m=+0.173463812 container init 6d746c0d9901f36515318597abd149681402966f5c7a93b5d51f63757e229666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lumiere, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:18:58 np0005544708 podman[153261]: 2025-12-03 21:18:58.929761374 +0000 UTC m=+0.183896861 container start 6d746c0d9901f36515318597abd149681402966f5c7a93b5d51f63757e229666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lumiere, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  3 16:18:58 np0005544708 network[153299]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 16:18:58 np0005544708 network[153300]: 'network-scripts' will be removed from distribution in near future.
Dec  3 16:18:58 np0005544708 network[153301]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 16:18:59 np0005544708 podman[153261]: 2025-12-03 21:18:59.169136312 +0000 UTC m=+0.423271869 container attach 6d746c0d9901f36515318597abd149681402966f5c7a93b5d51f63757e229666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lumiere, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 16:18:59 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v372: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:18:59 np0005544708 lvm[153385]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:18:59 np0005544708 lvm[153385]: VG ceph_vg2 finished
Dec  3 16:18:59 np0005544708 lvm[153384]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:18:59 np0005544708 lvm[153384]: VG ceph_vg1 finished
Dec  3 16:18:59 np0005544708 lvm[153381]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:18:59 np0005544708 lvm[153381]: VG ceph_vg0 finished
Dec  3 16:19:00 np0005544708 romantic_lumiere[153278]: {}
Dec  3 16:19:00 np0005544708 podman[153261]: 2025-12-03 21:19:00.091761908 +0000 UTC m=+1.345897465 container died 6d746c0d9901f36515318597abd149681402966f5c7a93b5d51f63757e229666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:19:00 np0005544708 systemd[1]: libpod-6d746c0d9901f36515318597abd149681402966f5c7a93b5d51f63757e229666.scope: Deactivated successfully.
Dec  3 16:19:00 np0005544708 systemd[1]: libpod-6d746c0d9901f36515318597abd149681402966f5c7a93b5d51f63757e229666.scope: Consumed 1.354s CPU time.
Dec  3 16:19:00 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:19:00 np0005544708 systemd[1]: var-lib-containers-storage-overlay-7d1882832d911e92556a8534d548c3baa2b31f9dd3150afe8e8f01eeeec73072-merged.mount: Deactivated successfully.
Dec  3 16:19:00 np0005544708 podman[153261]: 2025-12-03 21:19:00.401881238 +0000 UTC m=+1.656016705 container remove 6d746c0d9901f36515318597abd149681402966f5c7a93b5d51f63757e229666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 16:19:00 np0005544708 systemd[1]: libpod-conmon-6d746c0d9901f36515318597abd149681402966f5c7a93b5d51f63757e229666.scope: Deactivated successfully.
Dec  3 16:19:00 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:19:00 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:19:00 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:19:00 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:19:01 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v373: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:01 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:19:01 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:19:03 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v374: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:03 np0005544708 python3.9[153681]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:19:04 np0005544708 python3.9[153834]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:19:05 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:19:05 np0005544708 python3.9[153987]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:19:05 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v375: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:06 np0005544708 python3.9[154140]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:19:06 np0005544708 podman[154141]: 2025-12-03 21:19:06.190653797 +0000 UTC m=+0.121008573 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Dec  3 16:19:07 np0005544708 python3.9[154320]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:19:07 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v376: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:09 np0005544708 python3.9[154473]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:19:09 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v377: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:09 np0005544708 python3.9[154626]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:19:09 np0005544708 ceph-osd[86059]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:19:09 np0005544708 ceph-osd[86059]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 4373 writes, 20K keys, 4373 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 4373 writes, 451 syncs, 9.70 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4373 writes, 20K keys, 4373 commit groups, 1.0 writes per commit group, ingest: 16.41 MB, 0.03 MB/s#012Interval WAL: 4373 writes, 451 syncs, 9.70 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561448c398d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561448c398d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  3 16:19:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:19:11 np0005544708 python3.9[154779]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:19:11 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v378: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:11 np0005544708 python3.9[154931]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:19:12 np0005544708 python3.9[155083]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:19:13 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v379: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:13 np0005544708 python3.9[155235]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:19:14 np0005544708 python3.9[155387]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:19:14 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:19:14 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 4515 writes, 20K keys, 4515 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 4515 writes, 505 syncs, 8.94 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4515 writes, 20K keys, 4515 commit groups, 1.0 writes per commit group, ingest: 16.57 MB, 0.03 MB/s#012Interval WAL: 4515 writes, 505 syncs, 8.94 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cf1bed58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cf1bed58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  3 16:19:14 np0005544708 python3.9[155539]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:19:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:19:15 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v380: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:15 np0005544708 python3.9[155691]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:19:16 np0005544708 python3.9[155843]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:19:17 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v381: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:17 np0005544708 python3.9[155995]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:19:18 np0005544708 podman[156119]: 2025-12-03 21:19:18.030645336 +0000 UTC m=+0.089095018 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  3 16:19:18 np0005544708 python3.9[156167]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:19:18 np0005544708 python3.9[156320]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:19:19 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:19:19 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 4150 writes, 19K keys, 4150 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 4150 writes, 366 syncs, 11.34 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4150 writes, 19K keys, 4150 commit groups, 1.0 writes per commit group, ingest: 16.19 MB, 0.03 MB/s#012Interval WAL: 4150 writes, 366 syncs, 11.34 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559f074b98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559f074b98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  3 16:19:19 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v382: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:19 np0005544708 python3.9[156472]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:19:20 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:19:20 np0005544708 python3.9[156624]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:19:21 np0005544708 python3.9[156776]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:19:21
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['volumes', 'backups', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'vms']
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [devicehealth INFO root] Check health
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v383: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:19:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:19:22 np0005544708 python3.9[156928]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:19:23 np0005544708 python3.9[157081]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 16:19:23 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v384: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:24 np0005544708 python3.9[157233]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 16:19:24 np0005544708 systemd[1]: Reloading.
Dec  3 16:19:24 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:19:24 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:19:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:19:25 np0005544708 python3.9[157420]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:19:25 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v385: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:25 np0005544708 python3.9[157573]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:19:26 np0005544708 python3.9[157726]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:19:27 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v386: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:27 np0005544708 python3.9[157879]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:19:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:19:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:19:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:19:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:19:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:19:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:19:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:19:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:19:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:19:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:19:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:19:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:19:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:19:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:19:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:19:28 np0005544708 python3.9[158032]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:19:28 np0005544708 python3.9[158185]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:19:29 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v387: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:29 np0005544708 python3.9[158338]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:19:30 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:19:31 np0005544708 python3.9[158491]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec  3 16:19:31 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v388: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:31 np0005544708 python3.9[158644]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  3 16:19:33 np0005544708 python3.9[158802]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  3 16:19:33 np0005544708 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 16:19:33 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v389: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:34 np0005544708 python3.9[158963]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 16:19:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:19:35 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v390: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:35 np0005544708 python3.9[159047]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:19:37 np0005544708 podman[159051]: 2025-12-03 21:19:37.194979488 +0000 UTC m=+0.125993563 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 16:19:37 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v391: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:39 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v392: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:19:41 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v393: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:43 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v394: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:45 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:19:45 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v395: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:47 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v396: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:19:48.923 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:19:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:19:48.926 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:19:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:19:48.926 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:19:49 np0005544708 podman[159221]: 2025-12-03 21:19:49.164766433 +0000 UTC m=+0.084513010 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 16:19:49 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v397: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:50 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:19:51 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v398: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:19:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:19:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:19:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:19:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:19:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:19:53 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v399: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:19:55 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v400: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:57 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v401: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:19:57.592216) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796797592299, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1495, "num_deletes": 251, "total_data_size": 1662403, "memory_usage": 1691344, "flush_reason": "Manual Compaction"}
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796797604185, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1620371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7557, "largest_seqno": 9051, "table_properties": {"data_size": 1613487, "index_size": 4023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13561, "raw_average_key_size": 18, "raw_value_size": 1599650, "raw_average_value_size": 2240, "num_data_blocks": 189, "num_entries": 714, "num_filter_entries": 714, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796633, "oldest_key_time": 1764796633, "file_creation_time": 1764796797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 12008 microseconds, and 4253 cpu microseconds.
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:19:57.604235) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1620371 bytes OK
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:19:57.604256) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:19:57.605110) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:19:57.605124) EVENT_LOG_v1 {"time_micros": 1764796797605120, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:19:57.605146) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1655887, prev total WAL file size 1655887, number of live WAL files 2.
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:19:57.605886) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1582KB)], [23(4392KB)]
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796797605944, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 6117825, "oldest_snapshot_seqno": -1}
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 2803 keys, 4865008 bytes, temperature: kUnknown
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796797654345, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 4865008, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4843590, "index_size": 13309, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7045, "raw_key_size": 65065, "raw_average_key_size": 23, "raw_value_size": 4790654, "raw_average_value_size": 1709, "num_data_blocks": 596, "num_entries": 2803, "num_filter_entries": 2803, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796079, "oldest_key_time": 0, "file_creation_time": 1764796797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:19:57.654878) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 4865008 bytes
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:19:57.656389) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.9 rd, 100.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 4.3 +0.0 blob) out(4.6 +0.0 blob), read-write-amplify(6.8) write-amplify(3.0) OK, records in: 3317, records dropped: 514 output_compression: NoCompression
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:19:57.656420) EVENT_LOG_v1 {"time_micros": 1764796797656405, "job": 8, "event": "compaction_finished", "compaction_time_micros": 48584, "compaction_time_cpu_micros": 22794, "output_level": 6, "num_output_files": 1, "total_output_size": 4865008, "num_input_records": 3317, "num_output_records": 2803, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796797657453, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764796797659230, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:19:57.605781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:19:57.659457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:19:57.659465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:19:57.659470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:19:57.659474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:19:57 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:19:57.659478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:19:59 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v402: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:00 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:20:01 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v403: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  3 16:20:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec  3 16:20:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:20:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:20:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:20:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:20:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:20:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:20:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:20:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:20:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:20:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:20:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:20:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:20:01 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec  3 16:20:01 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:20:01 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:20:01 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:20:01 np0005544708 podman[159426]: 2025-12-03 21:20:01.918799241 +0000 UTC m=+0.068026129 container create 63a0579851b37bf5208137dda26a18239520ac565dfb402e4a545e6d325d0aa9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_shirley, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  3 16:20:01 np0005544708 systemd[1]: Started libpod-conmon-63a0579851b37bf5208137dda26a18239520ac565dfb402e4a545e6d325d0aa9.scope.
Dec  3 16:20:01 np0005544708 podman[159426]: 2025-12-03 21:20:01.893512755 +0000 UTC m=+0.042739723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:20:02 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:20:02 np0005544708 podman[159426]: 2025-12-03 21:20:02.045441737 +0000 UTC m=+0.194668715 container init 63a0579851b37bf5208137dda26a18239520ac565dfb402e4a545e6d325d0aa9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_shirley, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:20:02 np0005544708 podman[159426]: 2025-12-03 21:20:02.059436251 +0000 UTC m=+0.208663149 container start 63a0579851b37bf5208137dda26a18239520ac565dfb402e4a545e6d325d0aa9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:20:02 np0005544708 podman[159426]: 2025-12-03 21:20:02.063249804 +0000 UTC m=+0.212476782 container attach 63a0579851b37bf5208137dda26a18239520ac565dfb402e4a545e6d325d0aa9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_shirley, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 16:20:02 np0005544708 interesting_shirley[159442]: 167 167
Dec  3 16:20:02 np0005544708 systemd[1]: libpod-63a0579851b37bf5208137dda26a18239520ac565dfb402e4a545e6d325d0aa9.scope: Deactivated successfully.
Dec  3 16:20:02 np0005544708 podman[159426]: 2025-12-03 21:20:02.072884341 +0000 UTC m=+0.222111229 container died 63a0579851b37bf5208137dda26a18239520ac565dfb402e4a545e6d325d0aa9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 16:20:02 np0005544708 systemd[1]: var-lib-containers-storage-overlay-5076c30a60eb46bd1412f7d9e9d0abf6774ef4080deade970ab78fc2ef7530a9-merged.mount: Deactivated successfully.
Dec  3 16:20:02 np0005544708 podman[159426]: 2025-12-03 21:20:02.131360135 +0000 UTC m=+0.280587053 container remove 63a0579851b37bf5208137dda26a18239520ac565dfb402e4a545e6d325d0aa9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_shirley, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  3 16:20:02 np0005544708 systemd[1]: libpod-conmon-63a0579851b37bf5208137dda26a18239520ac565dfb402e4a545e6d325d0aa9.scope: Deactivated successfully.
Dec  3 16:20:02 np0005544708 podman[159465]: 2025-12-03 21:20:02.329616006 +0000 UTC m=+0.055333011 container create be88db6e62308f314a85e2baec9c71812221cad5ffd101f6dbe06af3c6dbe890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_euler, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Dec  3 16:20:02 np0005544708 systemd[1]: Started libpod-conmon-be88db6e62308f314a85e2baec9c71812221cad5ffd101f6dbe06af3c6dbe890.scope.
Dec  3 16:20:02 np0005544708 podman[159465]: 2025-12-03 21:20:02.303416976 +0000 UTC m=+0.029133991 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:20:02 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:20:02 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9038a53f4f5b82a3517ec716b8436d10c1c630d453e4b77bd4af332b992548/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:20:02 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9038a53f4f5b82a3517ec716b8436d10c1c630d453e4b77bd4af332b992548/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:20:02 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9038a53f4f5b82a3517ec716b8436d10c1c630d453e4b77bd4af332b992548/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:20:02 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9038a53f4f5b82a3517ec716b8436d10c1c630d453e4b77bd4af332b992548/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:20:02 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9038a53f4f5b82a3517ec716b8436d10c1c630d453e4b77bd4af332b992548/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:20:02 np0005544708 podman[159465]: 2025-12-03 21:20:02.448821493 +0000 UTC m=+0.174538548 container init be88db6e62308f314a85e2baec9c71812221cad5ffd101f6dbe06af3c6dbe890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_euler, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:20:02 np0005544708 podman[159465]: 2025-12-03 21:20:02.461593804 +0000 UTC m=+0.187310799 container start be88db6e62308f314a85e2baec9c71812221cad5ffd101f6dbe06af3c6dbe890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  3 16:20:02 np0005544708 podman[159465]: 2025-12-03 21:20:02.465367315 +0000 UTC m=+0.191084380 container attach be88db6e62308f314a85e2baec9c71812221cad5ffd101f6dbe06af3c6dbe890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_euler, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  3 16:20:03 np0005544708 busy_euler[159481]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:20:03 np0005544708 busy_euler[159481]: --> All data devices are unavailable
Dec  3 16:20:03 np0005544708 systemd[1]: libpod-be88db6e62308f314a85e2baec9c71812221cad5ffd101f6dbe06af3c6dbe890.scope: Deactivated successfully.
Dec  3 16:20:03 np0005544708 podman[159465]: 2025-12-03 21:20:03.041331015 +0000 UTC m=+0.767048030 container died be88db6e62308f314a85e2baec9c71812221cad5ffd101f6dbe06af3c6dbe890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_euler, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:20:03 np0005544708 systemd[1]: var-lib-containers-storage-overlay-6c9038a53f4f5b82a3517ec716b8436d10c1c630d453e4b77bd4af332b992548-merged.mount: Deactivated successfully.
Dec  3 16:20:03 np0005544708 podman[159465]: 2025-12-03 21:20:03.100023805 +0000 UTC m=+0.825740820 container remove be88db6e62308f314a85e2baec9c71812221cad5ffd101f6dbe06af3c6dbe890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:20:03 np0005544708 systemd[1]: libpod-conmon-be88db6e62308f314a85e2baec9c71812221cad5ffd101f6dbe06af3c6dbe890.scope: Deactivated successfully.
Dec  3 16:20:03 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v404: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:03 np0005544708 podman[159576]: 2025-12-03 21:20:03.633613181 +0000 UTC m=+0.073707041 container create d82c8b647b0dc487345142831ae2fc8d8d2ab61c8f3d18454a89b282d5d44c66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jones, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:20:03 np0005544708 systemd[1]: Started libpod-conmon-d82c8b647b0dc487345142831ae2fc8d8d2ab61c8f3d18454a89b282d5d44c66.scope.
Dec  3 16:20:03 np0005544708 podman[159576]: 2025-12-03 21:20:03.605396167 +0000 UTC m=+0.045490077 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:20:03 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:20:03 np0005544708 podman[159576]: 2025-12-03 21:20:03.738328462 +0000 UTC m=+0.178422302 container init d82c8b647b0dc487345142831ae2fc8d8d2ab61c8f3d18454a89b282d5d44c66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jones, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  3 16:20:03 np0005544708 podman[159576]: 2025-12-03 21:20:03.750188539 +0000 UTC m=+0.190282399 container start d82c8b647b0dc487345142831ae2fc8d8d2ab61c8f3d18454a89b282d5d44c66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 16:20:03 np0005544708 podman[159576]: 2025-12-03 21:20:03.754936506 +0000 UTC m=+0.195030366 container attach d82c8b647b0dc487345142831ae2fc8d8d2ab61c8f3d18454a89b282d5d44c66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:20:03 np0005544708 stupefied_jones[159593]: 167 167
Dec  3 16:20:03 np0005544708 podman[159576]: 2025-12-03 21:20:03.757198456 +0000 UTC m=+0.197292316 container died d82c8b647b0dc487345142831ae2fc8d8d2ab61c8f3d18454a89b282d5d44c66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:20:03 np0005544708 systemd[1]: libpod-d82c8b647b0dc487345142831ae2fc8d8d2ab61c8f3d18454a89b282d5d44c66.scope: Deactivated successfully.
Dec  3 16:20:03 np0005544708 systemd[1]: var-lib-containers-storage-overlay-872a2c0d673d6a6e1490f2b993ba1be2a663fa916ed156078b2988eb40fd763e-merged.mount: Deactivated successfully.
Dec  3 16:20:03 np0005544708 podman[159576]: 2025-12-03 21:20:03.820814527 +0000 UTC m=+0.260908387 container remove d82c8b647b0dc487345142831ae2fc8d8d2ab61c8f3d18454a89b282d5d44c66 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec  3 16:20:03 np0005544708 systemd[1]: libpod-conmon-d82c8b647b0dc487345142831ae2fc8d8d2ab61c8f3d18454a89b282d5d44c66.scope: Deactivated successfully.
Dec  3 16:20:04 np0005544708 podman[159617]: 2025-12-03 21:20:04.058970734 +0000 UTC m=+0.071731428 container create 609e5506fc27820ad7bdad44f9427bc4b4b60e3c6a4d4124c7ca9a975833cd08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 16:20:04 np0005544708 systemd[1]: Started libpod-conmon-609e5506fc27820ad7bdad44f9427bc4b4b60e3c6a4d4124c7ca9a975833cd08.scope.
Dec  3 16:20:04 np0005544708 podman[159617]: 2025-12-03 21:20:04.029010284 +0000 UTC m=+0.041771038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:20:04 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:20:04 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd828d11d900fc2f746b6ca4d5d0eda44ea7410e223e2c3fb70061cd52c8c297/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:20:04 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd828d11d900fc2f746b6ca4d5d0eda44ea7410e223e2c3fb70061cd52c8c297/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:20:04 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd828d11d900fc2f746b6ca4d5d0eda44ea7410e223e2c3fb70061cd52c8c297/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:20:04 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd828d11d900fc2f746b6ca4d5d0eda44ea7410e223e2c3fb70061cd52c8c297/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:20:04 np0005544708 podman[159617]: 2025-12-03 21:20:04.232195456 +0000 UTC m=+0.244956170 container init 609e5506fc27820ad7bdad44f9427bc4b4b60e3c6a4d4124c7ca9a975833cd08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  3 16:20:04 np0005544708 podman[159617]: 2025-12-03 21:20:04.239087601 +0000 UTC m=+0.251848305 container start 609e5506fc27820ad7bdad44f9427bc4b4b60e3c6a4d4124c7ca9a975833cd08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_davinci, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:20:04 np0005544708 podman[159617]: 2025-12-03 21:20:04.242730038 +0000 UTC m=+0.255490742 container attach 609e5506fc27820ad7bdad44f9427bc4b4b60e3c6a4d4124c7ca9a975833cd08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_davinci, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]: {
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:    "0": [
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:        {
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "devices": [
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "/dev/loop3"
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            ],
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "lv_name": "ceph_lv0",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "lv_size": "21470642176",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "name": "ceph_lv0",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "tags": {
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.cluster_name": "ceph",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.crush_device_class": "",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.encrypted": "0",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.objectstore": "bluestore",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.osd_id": "0",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.type": "block",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.vdo": "0",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.with_tpm": "0"
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            },
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "type": "block",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "vg_name": "ceph_vg0"
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:        }
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:    ],
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:    "1": [
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:        {
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "devices": [
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "/dev/loop4"
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            ],
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "lv_name": "ceph_lv1",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "lv_size": "21470642176",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "name": "ceph_lv1",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "tags": {
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.cluster_name": "ceph",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.crush_device_class": "",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.encrypted": "0",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.objectstore": "bluestore",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.osd_id": "1",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.type": "block",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.vdo": "0",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.with_tpm": "0"
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            },
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "type": "block",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "vg_name": "ceph_vg1"
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:        }
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:    ],
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:    "2": [
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:        {
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "devices": [
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "/dev/loop5"
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            ],
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "lv_name": "ceph_lv2",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "lv_size": "21470642176",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "name": "ceph_lv2",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "tags": {
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.cluster_name": "ceph",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.crush_device_class": "",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.encrypted": "0",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.objectstore": "bluestore",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.osd_id": "2",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.type": "block",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.vdo": "0",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:                "ceph.with_tpm": "0"
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            },
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "type": "block",
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:            "vg_name": "ceph_vg2"
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:        }
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]:    ]
Dec  3 16:20:04 np0005544708 pensive_davinci[159633]: }
Dec  3 16:20:04 np0005544708 systemd[1]: libpod-609e5506fc27820ad7bdad44f9427bc4b4b60e3c6a4d4124c7ca9a975833cd08.scope: Deactivated successfully.
Dec  3 16:20:04 np0005544708 podman[159617]: 2025-12-03 21:20:04.573259936 +0000 UTC m=+0.586020610 container died 609e5506fc27820ad7bdad44f9427bc4b4b60e3c6a4d4124c7ca9a975833cd08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:20:04 np0005544708 systemd[1]: var-lib-containers-storage-overlay-dd828d11d900fc2f746b6ca4d5d0eda44ea7410e223e2c3fb70061cd52c8c297-merged.mount: Deactivated successfully.
Dec  3 16:20:04 np0005544708 podman[159617]: 2025-12-03 21:20:04.984046699 +0000 UTC m=+0.996807393 container remove 609e5506fc27820ad7bdad44f9427bc4b4b60e3c6a4d4124c7ca9a975833cd08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_davinci, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  3 16:20:04 np0005544708 systemd[1]: libpod-conmon-609e5506fc27820ad7bdad44f9427bc4b4b60e3c6a4d4124c7ca9a975833cd08.scope: Deactivated successfully.
Dec  3 16:20:05 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:20:05 np0005544708 kernel: SELinux:  Converting 2769 SID table entries...
Dec  3 16:20:05 np0005544708 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 16:20:05 np0005544708 kernel: SELinux:  policy capability open_perms=1
Dec  3 16:20:05 np0005544708 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 16:20:05 np0005544708 kernel: SELinux:  policy capability always_check_network=0
Dec  3 16:20:05 np0005544708 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 16:20:05 np0005544708 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 16:20:05 np0005544708 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 16:20:05 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v405: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:05 np0005544708 podman[159727]: 2025-12-03 21:20:05.558853958 +0000 UTC m=+0.082840236 container create 8cea69ff0e006d0b6c4a82280279fef7efdba44802493c21bee7990a72b6f0fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kowalevski, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 16:20:05 np0005544708 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec  3 16:20:05 np0005544708 podman[159727]: 2025-12-03 21:20:05.516145666 +0000 UTC m=+0.040132004 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:20:05 np0005544708 systemd[1]: Started libpod-conmon-8cea69ff0e006d0b6c4a82280279fef7efdba44802493c21bee7990a72b6f0fa.scope.
Dec  3 16:20:05 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:20:05 np0005544708 podman[159727]: 2025-12-03 21:20:05.680505271 +0000 UTC m=+0.204491599 container init 8cea69ff0e006d0b6c4a82280279fef7efdba44802493c21bee7990a72b6f0fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec  3 16:20:05 np0005544708 podman[159727]: 2025-12-03 21:20:05.691147565 +0000 UTC m=+0.215133813 container start 8cea69ff0e006d0b6c4a82280279fef7efdba44802493c21bee7990a72b6f0fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:20:05 np0005544708 podman[159727]: 2025-12-03 21:20:05.695610654 +0000 UTC m=+0.219597002 container attach 8cea69ff0e006d0b6c4a82280279fef7efdba44802493c21bee7990a72b6f0fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kowalevski, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 16:20:05 np0005544708 youthful_kowalevski[159743]: 167 167
Dec  3 16:20:05 np0005544708 systemd[1]: libpod-8cea69ff0e006d0b6c4a82280279fef7efdba44802493c21bee7990a72b6f0fa.scope: Deactivated successfully.
Dec  3 16:20:05 np0005544708 podman[159727]: 2025-12-03 21:20:05.700066494 +0000 UTC m=+0.224052802 container died 8cea69ff0e006d0b6c4a82280279fef7efdba44802493c21bee7990a72b6f0fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:20:05 np0005544708 systemd[1]: var-lib-containers-storage-overlay-a152a6800551a26a5c491feceb42fb3cf7291ec9d4bf59801e80a0e69c81734f-merged.mount: Deactivated successfully.
Dec  3 16:20:05 np0005544708 podman[159727]: 2025-12-03 21:20:05.751287273 +0000 UTC m=+0.275273531 container remove 8cea69ff0e006d0b6c4a82280279fef7efdba44802493c21bee7990a72b6f0fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kowalevski, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:20:05 np0005544708 systemd[1]: libpod-conmon-8cea69ff0e006d0b6c4a82280279fef7efdba44802493c21bee7990a72b6f0fa.scope: Deactivated successfully.
Dec  3 16:20:05 np0005544708 podman[159768]: 2025-12-03 21:20:05.955601816 +0000 UTC m=+0.066743256 container create b993f3c2ff8a772e6c9327b715a6bc378d00e0792c03a60a7d43d03ad6d185ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_cannon, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:20:06 np0005544708 systemd[1]: Started libpod-conmon-b993f3c2ff8a772e6c9327b715a6bc378d00e0792c03a60a7d43d03ad6d185ff.scope.
Dec  3 16:20:06 np0005544708 podman[159768]: 2025-12-03 21:20:05.929353944 +0000 UTC m=+0.040495474 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:20:06 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:20:06 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adeffc4b78eeb92b775884d0e54b73ae2566810a0d8d8a814f3de9ee3b46a3b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:20:06 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adeffc4b78eeb92b775884d0e54b73ae2566810a0d8d8a814f3de9ee3b46a3b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:20:06 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adeffc4b78eeb92b775884d0e54b73ae2566810a0d8d8a814f3de9ee3b46a3b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:20:06 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adeffc4b78eeb92b775884d0e54b73ae2566810a0d8d8a814f3de9ee3b46a3b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:20:06 np0005544708 podman[159768]: 2025-12-03 21:20:06.059171225 +0000 UTC m=+0.170312705 container init b993f3c2ff8a772e6c9327b715a6bc378d00e0792c03a60a7d43d03ad6d185ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec  3 16:20:06 np0005544708 podman[159768]: 2025-12-03 21:20:06.073945441 +0000 UTC m=+0.185086901 container start b993f3c2ff8a772e6c9327b715a6bc378d00e0792c03a60a7d43d03ad6d185ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_cannon, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  3 16:20:06 np0005544708 podman[159768]: 2025-12-03 21:20:06.078390809 +0000 UTC m=+0.189532289 container attach b993f3c2ff8a772e6c9327b715a6bc378d00e0792c03a60a7d43d03ad6d185ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 16:20:06 np0005544708 lvm[159862]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:20:06 np0005544708 lvm[159865]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:20:06 np0005544708 lvm[159866]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:20:06 np0005544708 lvm[159866]: VG ceph_vg2 finished
Dec  3 16:20:06 np0005544708 lvm[159862]: VG ceph_vg0 finished
Dec  3 16:20:06 np0005544708 lvm[159865]: VG ceph_vg1 finished
Dec  3 16:20:06 np0005544708 youthful_cannon[159784]: {}
Dec  3 16:20:06 np0005544708 systemd[1]: libpod-b993f3c2ff8a772e6c9327b715a6bc378d00e0792c03a60a7d43d03ad6d185ff.scope: Deactivated successfully.
Dec  3 16:20:06 np0005544708 systemd[1]: libpod-b993f3c2ff8a772e6c9327b715a6bc378d00e0792c03a60a7d43d03ad6d185ff.scope: Consumed 1.285s CPU time.
Dec  3 16:20:06 np0005544708 podman[159768]: 2025-12-03 21:20:06.89599732 +0000 UTC m=+1.007138780 container died b993f3c2ff8a772e6c9327b715a6bc378d00e0792c03a60a7d43d03ad6d185ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:20:06 np0005544708 systemd[1]: var-lib-containers-storage-overlay-adeffc4b78eeb92b775884d0e54b73ae2566810a0d8d8a814f3de9ee3b46a3b6-merged.mount: Deactivated successfully.
Dec  3 16:20:06 np0005544708 podman[159768]: 2025-12-03 21:20:06.954592407 +0000 UTC m=+1.065733847 container remove b993f3c2ff8a772e6c9327b715a6bc378d00e0792c03a60a7d43d03ad6d185ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:20:06 np0005544708 systemd[1]: libpod-conmon-b993f3c2ff8a772e6c9327b715a6bc378d00e0792c03a60a7d43d03ad6d185ff.scope: Deactivated successfully.
Dec  3 16:20:07 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:20:07 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:20:07 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:20:07 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:20:07 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v406: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:08 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:20:08 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:20:08 np0005544708 podman[159908]: 2025-12-03 21:20:08.169795679 +0000 UTC m=+0.106797596 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 16:20:09 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v407: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:20:11 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v408: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:13 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v409: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:20:15 np0005544708 kernel: SELinux:  Converting 2769 SID table entries...
Dec  3 16:20:15 np0005544708 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 16:20:15 np0005544708 kernel: SELinux:  policy capability open_perms=1
Dec  3 16:20:15 np0005544708 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 16:20:15 np0005544708 kernel: SELinux:  policy capability always_check_network=0
Dec  3 16:20:15 np0005544708 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 16:20:15 np0005544708 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 16:20:15 np0005544708 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 16:20:15 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v410: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:17 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v411: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:19 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v412: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:20 np0005544708 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec  3 16:20:20 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:20:20 np0005544708 podman[159943]: 2025-12-03 21:20:20.173212853 +0000 UTC m=+0.093954593 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:20:21
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.mgr', 'images', 'cephfs.cephfs.meta', 'volumes', 'vms']
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v413: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:20:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:20:23 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v414: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:20:25 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v415: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:27 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v416: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:20:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:20:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:20:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:20:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:20:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:20:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:20:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:20:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:20:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:20:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:20:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:20:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:20:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:20:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:20:29 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v417: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:30 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:20:31 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v418: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:33 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v419: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:20:35 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v420: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:37 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v421: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:39 np0005544708 podman[165653]: 2025-12-03 21:20:39.17427111 +0000 UTC m=+0.113998159 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 16:20:39 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v422: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:20:41 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v423: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:43 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v424: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:45 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:20:45 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v425: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:47 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v426: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:20:48.924 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:20:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:20:48.924 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:20:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:20:48.924 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:20:49 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v427: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:50 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:20:51 np0005544708 podman[171012]: 2025-12-03 21:20:51.139199394 +0000 UTC m=+0.070190687 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 16:20:51 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v428: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:20:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:20:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:20:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:20:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:20:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:20:53 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v429: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:20:55 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v430: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:57 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v431: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:20:59 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v432: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:00 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:21:01 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v433: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:03 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v434: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:05 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:21:05 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v435: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:07 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v436: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:07 np0005544708 podman[176911]: 2025-12-03 21:21:07.86965549 +0000 UTC m=+0.089929031 container exec 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:21:07 np0005544708 podman[176911]: 2025-12-03 21:21:07.996523671 +0000 UTC m=+0.216797242 container exec_died 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  3 16:21:08 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:21:08 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:21:08 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:21:08 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:21:09 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v437: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:21:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:21:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:21:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:21:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:21:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:21:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:21:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:21:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:21:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:21:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:21:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:21:09 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:21:09 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:21:09 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:21:09 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:21:09 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:21:09 np0005544708 podman[177179]: 2025-12-03 21:21:09.872007314 +0000 UTC m=+0.124209220 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:21:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:21:10 np0005544708 podman[177242]: 2025-12-03 21:21:10.166499372 +0000 UTC m=+0.064673225 container create 99ae0296353e733a3610ed2a590a4bf1e3583613aa8974726f03a3912bf0e2b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  3 16:21:10 np0005544708 systemd[1]: Started libpod-conmon-99ae0296353e733a3610ed2a590a4bf1e3583613aa8974726f03a3912bf0e2b3.scope.
Dec  3 16:21:10 np0005544708 podman[177242]: 2025-12-03 21:21:10.137769723 +0000 UTC m=+0.035943616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:21:10 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:21:10 np0005544708 podman[177242]: 2025-12-03 21:21:10.27590788 +0000 UTC m=+0.174081793 container init 99ae0296353e733a3610ed2a590a4bf1e3583613aa8974726f03a3912bf0e2b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Dec  3 16:21:10 np0005544708 podman[177242]: 2025-12-03 21:21:10.289019596 +0000 UTC m=+0.187193429 container start 99ae0296353e733a3610ed2a590a4bf1e3583613aa8974726f03a3912bf0e2b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ptolemy, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:21:10 np0005544708 podman[177242]: 2025-12-03 21:21:10.293140688 +0000 UTC m=+0.191314591 container attach 99ae0296353e733a3610ed2a590a4bf1e3583613aa8974726f03a3912bf0e2b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ptolemy, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 16:21:10 np0005544708 practical_ptolemy[177258]: 167 167
Dec  3 16:21:10 np0005544708 systemd[1]: libpod-99ae0296353e733a3610ed2a590a4bf1e3583613aa8974726f03a3912bf0e2b3.scope: Deactivated successfully.
Dec  3 16:21:10 np0005544708 podman[177242]: 2025-12-03 21:21:10.298959616 +0000 UTC m=+0.197133479 container died 99ae0296353e733a3610ed2a590a4bf1e3583613aa8974726f03a3912bf0e2b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:21:10 np0005544708 systemd[1]: var-lib-containers-storage-overlay-46d857fd9e8223772005d101da5a22ed3d646476fc16c27cfe7102e0b792673f-merged.mount: Deactivated successfully.
Dec  3 16:21:10 np0005544708 podman[177242]: 2025-12-03 21:21:10.355385756 +0000 UTC m=+0.253559609 container remove 99ae0296353e733a3610ed2a590a4bf1e3583613aa8974726f03a3912bf0e2b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ptolemy, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 16:21:10 np0005544708 systemd[1]: libpod-conmon-99ae0296353e733a3610ed2a590a4bf1e3583613aa8974726f03a3912bf0e2b3.scope: Deactivated successfully.
Dec  3 16:21:10 np0005544708 podman[177282]: 2025-12-03 21:21:10.536498218 +0000 UTC m=+0.039648146 container create 24ac335ea57eb37bed316a0080299568390d88c8b8f613a515057509f0254e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 16:21:10 np0005544708 systemd[1]: Started libpod-conmon-24ac335ea57eb37bed316a0080299568390d88c8b8f613a515057509f0254e7a.scope.
Dec  3 16:21:10 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:21:10 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb37f58d73bb96add7ef54cf22cbe5047a6f4d565aa7071704a663dc96c4fbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:21:10 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb37f58d73bb96add7ef54cf22cbe5047a6f4d565aa7071704a663dc96c4fbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:21:10 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb37f58d73bb96add7ef54cf22cbe5047a6f4d565aa7071704a663dc96c4fbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:21:10 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb37f58d73bb96add7ef54cf22cbe5047a6f4d565aa7071704a663dc96c4fbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:21:10 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb37f58d73bb96add7ef54cf22cbe5047a6f4d565aa7071704a663dc96c4fbb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:21:10 np0005544708 podman[177282]: 2025-12-03 21:21:10.519140098 +0000 UTC m=+0.022290056 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:21:10 np0005544708 podman[177282]: 2025-12-03 21:21:10.621442393 +0000 UTC m=+0.124592381 container init 24ac335ea57eb37bed316a0080299568390d88c8b8f613a515057509f0254e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lewin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:21:10 np0005544708 podman[177282]: 2025-12-03 21:21:10.626552542 +0000 UTC m=+0.129702510 container start 24ac335ea57eb37bed316a0080299568390d88c8b8f613a515057509f0254e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lewin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 16:21:10 np0005544708 podman[177282]: 2025-12-03 21:21:10.630744765 +0000 UTC m=+0.133894733 container attach 24ac335ea57eb37bed316a0080299568390d88c8b8f613a515057509f0254e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  3 16:21:11 np0005544708 fervent_lewin[177298]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:21:11 np0005544708 fervent_lewin[177298]: --> All data devices are unavailable
Dec  3 16:21:11 np0005544708 systemd[1]: libpod-24ac335ea57eb37bed316a0080299568390d88c8b8f613a515057509f0254e7a.scope: Deactivated successfully.
Dec  3 16:21:11 np0005544708 podman[177282]: 2025-12-03 21:21:11.208034705 +0000 UTC m=+0.711184703 container died 24ac335ea57eb37bed316a0080299568390d88c8b8f613a515057509f0254e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 16:21:11 np0005544708 systemd[1]: var-lib-containers-storage-overlay-adb37f58d73bb96add7ef54cf22cbe5047a6f4d565aa7071704a663dc96c4fbb-merged.mount: Deactivated successfully.
Dec  3 16:21:11 np0005544708 podman[177282]: 2025-12-03 21:21:11.278844845 +0000 UTC m=+0.781994813 container remove 24ac335ea57eb37bed316a0080299568390d88c8b8f613a515057509f0254e7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 16:21:11 np0005544708 systemd[1]: libpod-conmon-24ac335ea57eb37bed316a0080299568390d88c8b8f613a515057509f0254e7a.scope: Deactivated successfully.
Dec  3 16:21:11 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v438: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:11 np0005544708 podman[177394]: 2025-12-03 21:21:11.879792447 +0000 UTC m=+0.070485533 container create db63a9debb38af9bd72a0f7f64c1d2820d052d40487fd0cbc5819836ee80cc52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:21:11 np0005544708 systemd[1]: Started libpod-conmon-db63a9debb38af9bd72a0f7f64c1d2820d052d40487fd0cbc5819836ee80cc52.scope.
Dec  3 16:21:11 np0005544708 podman[177394]: 2025-12-03 21:21:11.851218802 +0000 UTC m=+0.041911708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:21:11 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:21:11 np0005544708 podman[177394]: 2025-12-03 21:21:11.979007687 +0000 UTC m=+0.169700543 container init db63a9debb38af9bd72a0f7f64c1d2820d052d40487fd0cbc5819836ee80cc52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_murdock, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:21:11 np0005544708 podman[177394]: 2025-12-03 21:21:11.988356582 +0000 UTC m=+0.179049408 container start db63a9debb38af9bd72a0f7f64c1d2820d052d40487fd0cbc5819836ee80cc52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_murdock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec  3 16:21:11 np0005544708 podman[177394]: 2025-12-03 21:21:11.993032638 +0000 UTC m=+0.183725464 container attach db63a9debb38af9bd72a0f7f64c1d2820d052d40487fd0cbc5819836ee80cc52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_murdock, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec  3 16:21:11 np0005544708 wizardly_murdock[177410]: 167 167
Dec  3 16:21:11 np0005544708 systemd[1]: libpod-db63a9debb38af9bd72a0f7f64c1d2820d052d40487fd0cbc5819836ee80cc52.scope: Deactivated successfully.
Dec  3 16:21:11 np0005544708 podman[177394]: 2025-12-03 21:21:11.995752842 +0000 UTC m=+0.186445728 container died db63a9debb38af9bd72a0f7f64c1d2820d052d40487fd0cbc5819836ee80cc52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_murdock, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:21:12 np0005544708 systemd[1]: var-lib-containers-storage-overlay-44fbb91765d700a77d987c0e23575b79a32e68b4bdb865dbf2b470630cb2c590-merged.mount: Deactivated successfully.
Dec  3 16:21:12 np0005544708 podman[177394]: 2025-12-03 21:21:12.046935141 +0000 UTC m=+0.237627957 container remove db63a9debb38af9bd72a0f7f64c1d2820d052d40487fd0cbc5819836ee80cc52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_murdock, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  3 16:21:12 np0005544708 systemd[1]: libpod-conmon-db63a9debb38af9bd72a0f7f64c1d2820d052d40487fd0cbc5819836ee80cc52.scope: Deactivated successfully.
Dec  3 16:21:12 np0005544708 podman[177434]: 2025-12-03 21:21:12.308138116 +0000 UTC m=+0.060873493 container create 6dd71f86c1c70b5311eda3e7bd9c02607bb5dd6ccbc579920f336702c3f49112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:21:12 np0005544708 systemd[1]: Started libpod-conmon-6dd71f86c1c70b5311eda3e7bd9c02607bb5dd6ccbc579920f336702c3f49112.scope.
Dec  3 16:21:12 np0005544708 podman[177434]: 2025-12-03 21:21:12.285016089 +0000 UTC m=+0.037751436 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:21:12 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:21:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a671ede9176e64498f1143b04ca399048fead870ce16516053f73f051ed659fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:21:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a671ede9176e64498f1143b04ca399048fead870ce16516053f73f051ed659fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:21:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a671ede9176e64498f1143b04ca399048fead870ce16516053f73f051ed659fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:21:12 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a671ede9176e64498f1143b04ca399048fead870ce16516053f73f051ed659fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:21:12 np0005544708 podman[177434]: 2025-12-03 21:21:12.421441839 +0000 UTC m=+0.174177276 container init 6dd71f86c1c70b5311eda3e7bd9c02607bb5dd6ccbc579920f336702c3f49112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wu, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  3 16:21:12 np0005544708 podman[177434]: 2025-12-03 21:21:12.431295997 +0000 UTC m=+0.184031374 container start 6dd71f86c1c70b5311eda3e7bd9c02607bb5dd6ccbc579920f336702c3f49112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wu, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 16:21:12 np0005544708 podman[177434]: 2025-12-03 21:21:12.437217087 +0000 UTC m=+0.189952464 container attach 6dd71f86c1c70b5311eda3e7bd9c02607bb5dd6ccbc579920f336702c3f49112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:21:12 np0005544708 crazy_wu[177451]: {
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:    "0": [
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:        {
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "devices": [
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "/dev/loop3"
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            ],
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "lv_name": "ceph_lv0",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "lv_size": "21470642176",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "name": "ceph_lv0",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "tags": {
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.cluster_name": "ceph",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.crush_device_class": "",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.encrypted": "0",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.objectstore": "bluestore",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.osd_id": "0",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.type": "block",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.vdo": "0",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.with_tpm": "0"
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            },
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "type": "block",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "vg_name": "ceph_vg0"
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:        }
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:    ],
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:    "1": [
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:        {
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "devices": [
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "/dev/loop4"
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            ],
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "lv_name": "ceph_lv1",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "lv_size": "21470642176",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "name": "ceph_lv1",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "tags": {
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.cluster_name": "ceph",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.crush_device_class": "",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.encrypted": "0",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.objectstore": "bluestore",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.osd_id": "1",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.type": "block",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.vdo": "0",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.with_tpm": "0"
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            },
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "type": "block",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "vg_name": "ceph_vg1"
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:        }
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:    ],
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:    "2": [
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:        {
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "devices": [
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "/dev/loop5"
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            ],
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "lv_name": "ceph_lv2",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "lv_size": "21470642176",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "name": "ceph_lv2",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "tags": {
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.cluster_name": "ceph",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.crush_device_class": "",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.encrypted": "0",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.objectstore": "bluestore",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.osd_id": "2",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.type": "block",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.vdo": "0",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:                "ceph.with_tpm": "0"
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            },
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "type": "block",
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:            "vg_name": "ceph_vg2"
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:        }
Dec  3 16:21:12 np0005544708 crazy_wu[177451]:    ]
Dec  3 16:21:12 np0005544708 crazy_wu[177451]: }
Dec  3 16:21:12 np0005544708 systemd[1]: libpod-6dd71f86c1c70b5311eda3e7bd9c02607bb5dd6ccbc579920f336702c3f49112.scope: Deactivated successfully.
Dec  3 16:21:12 np0005544708 podman[177434]: 2025-12-03 21:21:12.761826262 +0000 UTC m=+0.514561659 container died 6dd71f86c1c70b5311eda3e7bd9c02607bb5dd6ccbc579920f336702c3f49112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:21:12 np0005544708 systemd[1]: var-lib-containers-storage-overlay-a671ede9176e64498f1143b04ca399048fead870ce16516053f73f051ed659fa-merged.mount: Deactivated successfully.
Dec  3 16:21:12 np0005544708 podman[177434]: 2025-12-03 21:21:12.82183554 +0000 UTC m=+0.574570917 container remove 6dd71f86c1c70b5311eda3e7bd9c02607bb5dd6ccbc579920f336702c3f49112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:21:12 np0005544708 systemd[1]: libpod-conmon-6dd71f86c1c70b5311eda3e7bd9c02607bb5dd6ccbc579920f336702c3f49112.scope: Deactivated successfully.
Dec  3 16:21:13 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v439: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:13 np0005544708 podman[177535]: 2025-12-03 21:21:13.436981416 +0000 UTC m=+0.073547936 container create 30c50648a4cc6b85017e1817d64849e5c055fba4f38cd7db78e14f02c4561c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_chebyshev, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec  3 16:21:13 np0005544708 systemd[1]: Started libpod-conmon-30c50648a4cc6b85017e1817d64849e5c055fba4f38cd7db78e14f02c4561c5c.scope.
Dec  3 16:21:13 np0005544708 podman[177535]: 2025-12-03 21:21:13.406214692 +0000 UTC m=+0.042781262 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:21:13 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:21:13 np0005544708 podman[177535]: 2025-12-03 21:21:13.557985508 +0000 UTC m=+0.194552038 container init 30c50648a4cc6b85017e1817d64849e5c055fba4f38cd7db78e14f02c4561c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_chebyshev, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:21:13 np0005544708 podman[177535]: 2025-12-03 21:21:13.569079519 +0000 UTC m=+0.205646049 container start 30c50648a4cc6b85017e1817d64849e5c055fba4f38cd7db78e14f02c4561c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_chebyshev, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  3 16:21:13 np0005544708 podman[177535]: 2025-12-03 21:21:13.573010596 +0000 UTC m=+0.209577126 container attach 30c50648a4cc6b85017e1817d64849e5c055fba4f38cd7db78e14f02c4561c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_chebyshev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:21:13 np0005544708 loving_chebyshev[177552]: 167 167
Dec  3 16:21:13 np0005544708 systemd[1]: libpod-30c50648a4cc6b85017e1817d64849e5c055fba4f38cd7db78e14f02c4561c5c.scope: Deactivated successfully.
Dec  3 16:21:13 np0005544708 podman[177535]: 2025-12-03 21:21:13.577665932 +0000 UTC m=+0.214232462 container died 30c50648a4cc6b85017e1817d64849e5c055fba4f38cd7db78e14f02c4561c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  3 16:21:13 np0005544708 systemd[1]: var-lib-containers-storage-overlay-740d42e8f25e2c600de5e718fbe21f75d7e5c227a40081bc255eb2bd6cec3ceb-merged.mount: Deactivated successfully.
Dec  3 16:21:13 np0005544708 podman[177535]: 2025-12-03 21:21:13.625728986 +0000 UTC m=+0.262295516 container remove 30c50648a4cc6b85017e1817d64849e5c055fba4f38cd7db78e14f02c4561c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:21:13 np0005544708 systemd[1]: libpod-conmon-30c50648a4cc6b85017e1817d64849e5c055fba4f38cd7db78e14f02c4561c5c.scope: Deactivated successfully.
Dec  3 16:21:13 np0005544708 podman[177577]: 2025-12-03 21:21:13.880805965 +0000 UTC m=+0.067654696 container create 3c2163286f15650ce65b35d54b957b9ec4dbc3c5fd5d4b159b772433299a90b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hodgkin, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Dec  3 16:21:13 np0005544708 systemd[1]: Started libpod-conmon-3c2163286f15650ce65b35d54b957b9ec4dbc3c5fd5d4b159b772433299a90b3.scope.
Dec  3 16:21:13 np0005544708 podman[177577]: 2025-12-03 21:21:13.856821015 +0000 UTC m=+0.043669796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:21:13 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:21:13 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c104b44d8859697a66f0b5a2b5c538138a2394f92183fcc81cb2460ec602170c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:21:13 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c104b44d8859697a66f0b5a2b5c538138a2394f92183fcc81cb2460ec602170c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:21:13 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c104b44d8859697a66f0b5a2b5c538138a2394f92183fcc81cb2460ec602170c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:21:13 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c104b44d8859697a66f0b5a2b5c538138a2394f92183fcc81cb2460ec602170c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:21:13 np0005544708 podman[177577]: 2025-12-03 21:21:13.995011263 +0000 UTC m=+0.181860034 container init 3c2163286f15650ce65b35d54b957b9ec4dbc3c5fd5d4b159b772433299a90b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:21:14 np0005544708 podman[177577]: 2025-12-03 21:21:14.011366137 +0000 UTC m=+0.198214898 container start 3c2163286f15650ce65b35d54b957b9ec4dbc3c5fd5d4b159b772433299a90b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hodgkin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:21:14 np0005544708 podman[177577]: 2025-12-03 21:21:14.015849259 +0000 UTC m=+0.202698020 container attach 3c2163286f15650ce65b35d54b957b9ec4dbc3c5fd5d4b159b772433299a90b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:21:14 np0005544708 lvm[177679]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:21:14 np0005544708 lvm[177679]: VG ceph_vg2 finished
Dec  3 16:21:14 np0005544708 lvm[177677]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:21:14 np0005544708 lvm[177677]: VG ceph_vg1 finished
Dec  3 16:21:14 np0005544708 lvm[177676]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:21:14 np0005544708 lvm[177676]: VG ceph_vg0 finished
Dec  3 16:21:14 np0005544708 charming_hodgkin[177594]: {}
Dec  3 16:21:14 np0005544708 systemd[1]: libpod-3c2163286f15650ce65b35d54b957b9ec4dbc3c5fd5d4b159b772433299a90b3.scope: Deactivated successfully.
Dec  3 16:21:14 np0005544708 systemd[1]: libpod-3c2163286f15650ce65b35d54b957b9ec4dbc3c5fd5d4b159b772433299a90b3.scope: Consumed 1.373s CPU time.
Dec  3 16:21:14 np0005544708 podman[177577]: 2025-12-03 21:21:14.852302747 +0000 UTC m=+1.039151508 container died 3c2163286f15650ce65b35d54b957b9ec4dbc3c5fd5d4b159b772433299a90b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec  3 16:21:15 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v440: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:21:16 np0005544708 systemd[1]: var-lib-containers-storage-overlay-c104b44d8859697a66f0b5a2b5c538138a2394f92183fcc81cb2460ec602170c-merged.mount: Deactivated successfully.
Dec  3 16:21:16 np0005544708 podman[177577]: 2025-12-03 21:21:16.269347646 +0000 UTC m=+2.456196407 container remove 3c2163286f15650ce65b35d54b957b9ec4dbc3c5fd5d4b159b772433299a90b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 16:21:16 np0005544708 systemd[1]: libpod-conmon-3c2163286f15650ce65b35d54b957b9ec4dbc3c5fd5d4b159b772433299a90b3.scope: Deactivated successfully.
Dec  3 16:21:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:21:16 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:21:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:21:16 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:21:16 np0005544708 kernel: SELinux:  Converting 2770 SID table entries...
Dec  3 16:21:16 np0005544708 kernel: SELinux:  policy capability network_peer_controls=1
Dec  3 16:21:16 np0005544708 kernel: SELinux:  policy capability open_perms=1
Dec  3 16:21:16 np0005544708 kernel: SELinux:  policy capability extended_socket_class=1
Dec  3 16:21:16 np0005544708 kernel: SELinux:  policy capability always_check_network=0
Dec  3 16:21:16 np0005544708 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  3 16:21:16 np0005544708 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  3 16:21:16 np0005544708 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  3 16:21:16 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:21:16 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:21:17 np0005544708 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Dec  3 16:21:17 np0005544708 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec  3 16:21:17 np0005544708 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Dec  3 16:21:17 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v441: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:19 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v442: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:20 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:21:21
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', 'backups']
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:21:21 np0005544708 podman[188341]: 2025-12-03 21:21:21.869074581 +0000 UTC m=+0.088444810 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 16:21:21 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v443: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:23 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v444: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:21:25 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v445: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:26 np0005544708 systemd[1]: Stopping OpenSSH server daemon...
Dec  3 16:21:26 np0005544708 systemd[1]: sshd.service: Deactivated successfully.
Dec  3 16:21:26 np0005544708 systemd[1]: Stopped OpenSSH server daemon.
Dec  3 16:21:26 np0005544708 systemd[1]: sshd.service: Consumed 3.464s CPU time, read 32.0K from disk, written 20.0K to disk.
Dec  3 16:21:26 np0005544708 systemd[1]: Stopped target sshd-keygen.target.
Dec  3 16:21:26 np0005544708 systemd[1]: Stopping sshd-keygen.target...
Dec  3 16:21:26 np0005544708 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  3 16:21:26 np0005544708 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  3 16:21:26 np0005544708 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  3 16:21:26 np0005544708 systemd[1]: Reached target sshd-keygen.target.
Dec  3 16:21:26 np0005544708 systemd[1]: Starting OpenSSH server daemon...
Dec  3 16:21:26 np0005544708 systemd[1]: Started OpenSSH server daemon.
Dec  3 16:21:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:21:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:21:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:21:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:21:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:21:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:21:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:21:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:21:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:21:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:21:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:21:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:21:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:21:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:21:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:21:27 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v446: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:29 np0005544708 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 16:21:29 np0005544708 systemd[1]: Starting man-db-cache-update.service...
Dec  3 16:21:29 np0005544708 systemd[1]: Reloading.
Dec  3 16:21:29 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:21:29 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:21:29 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v447: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:30 np0005544708 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  3 16:21:30 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:21:31 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v448: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:33 np0005544708 python3.9[182408]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 16:21:33 np0005544708 systemd[1]: Reloading.
Dec  3 16:21:33 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:21:33 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:21:33 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v449: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:35 np0005544708 python3.9[183655]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 16:21:35 np0005544708 systemd[1]: Reloading.
Dec  3 16:21:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:21:35 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:21:35 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:21:35 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v450: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:36 np0005544708 python3.9[184938]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 16:21:36 np0005544708 systemd[1]: Reloading.
Dec  3 16:21:36 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:21:36 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:21:37 np0005544708 python3.9[186127]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 16:21:37 np0005544708 systemd[1]: Reloading.
Dec  3 16:21:37 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:21:37 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:21:37 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v451: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:38 np0005544708 python3.9[187366]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:21:39 np0005544708 systemd[1]: Reloading.
Dec  3 16:21:39 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:21:39 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:21:39 np0005544708 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 16:21:39 np0005544708 systemd[1]: Finished man-db-cache-update.service.
Dec  3 16:21:39 np0005544708 systemd[1]: man-db-cache-update.service: Consumed 12.999s CPU time.
Dec  3 16:21:39 np0005544708 systemd[1]: run-rd9af70c2e38147769a682765fb265333.service: Deactivated successfully.
Dec  3 16:21:39 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v452: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:40 np0005544708 podman[188341]: 2025-12-03 21:21:40.073000812 +0000 UTC m=+0.128002104 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 16:21:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:21:40 np0005544708 python3.9[188342]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:21:40 np0005544708 systemd[1]: Reloading.
Dec  3 16:21:40 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:21:40 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:21:41 np0005544708 python3.9[188557]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:21:41 np0005544708 systemd[1]: Reloading.
Dec  3 16:21:41 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:21:41 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:21:41 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v453: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:43 np0005544708 python3.9[188747]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:21:43 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v454: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:43 np0005544708 python3.9[188902]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:21:44 np0005544708 systemd[1]: Reloading.
Dec  3 16:21:44 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:21:44 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:21:45 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:21:45 np0005544708 python3.9[189093]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  3 16:21:45 np0005544708 systemd[1]: Reloading.
Dec  3 16:21:45 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:21:45 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:21:45 np0005544708 systemd[1]: Listening on libvirt proxy daemon socket.
Dec  3 16:21:45 np0005544708 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec  3 16:21:45 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v455: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:46 np0005544708 python3.9[189287]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:21:47 np0005544708 python3.9[189442]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:21:47 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v456: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:48 np0005544708 python3.9[189597]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:21:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:21:48.925 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:21:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:21:48.927 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:21:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:21:48.928 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:21:49 np0005544708 python3.9[189752]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:21:49 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v457: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:50 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:21:50 np0005544708 python3.9[189907]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:21:51 np0005544708 python3.9[190062]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:21:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:21:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:21:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:21:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:21:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:21:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:21:51 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v458: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:52 np0005544708 podman[190142]: 2025-12-03 21:21:52.165864036 +0000 UTC m=+0.087880716 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec  3 16:21:52 np0005544708 python3.9[190237]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:21:53 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v459: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:54 np0005544708 python3.9[190392]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:21:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:21:55 np0005544708 python3.9[190547]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:21:55 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v460: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:56 np0005544708 python3.9[190702]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:21:57 np0005544708 python3.9[190857]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:21:57 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v461: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:21:59 np0005544708 python3.9[191012]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:21:59 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v462: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:00 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:22:01 np0005544708 python3.9[191167]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:22:01 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v463: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:02 np0005544708 python3.9[191322]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  3 16:22:03 np0005544708 python3.9[191477]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:22:03 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v464: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:04 np0005544708 python3.9[191629]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:22:05 np0005544708 python3.9[191781]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:22:05 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:22:05 np0005544708 python3.9[191933]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:22:05 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v465: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:06 np0005544708 python3.9[192085]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:22:07 np0005544708 python3.9[192237]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:22:07 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v466: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:08 np0005544708 python3.9[192389]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:09 np0005544708 python3.9[192514]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764796927.668142-554-193661436375389/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:09 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v467: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:22:10 np0005544708 python3.9[192666]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:10 np0005544708 podman[192763]: 2025-12-03 21:22:10.711780575 +0000 UTC m=+0.094309408 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec  3 16:22:10 np0005544708 python3.9[192809]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764796929.5827339-554-266300659577984/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:11 np0005544708 python3.9[192968]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:11 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v468: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:12 np0005544708 python3.9[193093]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764796931.040219-554-278248195346666/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:13 np0005544708 python3.9[193245]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:13 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v469: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:14 np0005544708 python3.9[193370]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764796932.6429336-554-256132310814811/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:14 np0005544708 python3.9[193522]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:22:15 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v470: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:15 np0005544708 python3.9[193647]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764796934.209698-554-66459485172186/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:16 np0005544708 python3.9[193845]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:22:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:22:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:22:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:22:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:22:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:22:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:22:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:22:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:22:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:22:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:22:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:22:17 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:22:17 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:22:17 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:22:17 np0005544708 python3.9[194006]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764796936.1799395-554-131257519434828/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:17 np0005544708 podman[194146]: 2025-12-03 21:22:17.829690427 +0000 UTC m=+0.042190931 container create 1596133c9645282bdee465c891a2f000c8cdd0ad74da762446767d9533ddb92a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_bartik, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:22:17 np0005544708 systemd[1]: Started libpod-conmon-1596133c9645282bdee465c891a2f000c8cdd0ad74da762446767d9533ddb92a.scope.
Dec  3 16:22:17 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:22:17 np0005544708 podman[194146]: 2025-12-03 21:22:17.813451052 +0000 UTC m=+0.025951566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:22:17 np0005544708 podman[194146]: 2025-12-03 21:22:17.91973249 +0000 UTC m=+0.132233134 container init 1596133c9645282bdee465c891a2f000c8cdd0ad74da762446767d9533ddb92a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_bartik, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:22:17 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v471: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:17 np0005544708 podman[194146]: 2025-12-03 21:22:17.931724481 +0000 UTC m=+0.144224975 container start 1596133c9645282bdee465c891a2f000c8cdd0ad74da762446767d9533ddb92a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_bartik, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:22:17 np0005544708 podman[194146]: 2025-12-03 21:22:17.9350365 +0000 UTC m=+0.147537034 container attach 1596133c9645282bdee465c891a2f000c8cdd0ad74da762446767d9533ddb92a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_bartik, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:22:17 np0005544708 tender_bartik[194185]: 167 167
Dec  3 16:22:17 np0005544708 systemd[1]: libpod-1596133c9645282bdee465c891a2f000c8cdd0ad74da762446767d9533ddb92a.scope: Deactivated successfully.
Dec  3 16:22:17 np0005544708 podman[194146]: 2025-12-03 21:22:17.940891137 +0000 UTC m=+0.153391681 container died 1596133c9645282bdee465c891a2f000c8cdd0ad74da762446767d9533ddb92a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:22:17 np0005544708 systemd[1]: var-lib-containers-storage-overlay-bd3e02ddf8cf9205891c48d3b4abbe5d181cad98400f22a508daa4e8809bc6bb-merged.mount: Deactivated successfully.
Dec  3 16:22:17 np0005544708 podman[194146]: 2025-12-03 21:22:17.991537154 +0000 UTC m=+0.204037688 container remove 1596133c9645282bdee465c891a2f000c8cdd0ad74da762446767d9533ddb92a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_bartik, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  3 16:22:18 np0005544708 systemd[1]: libpod-conmon-1596133c9645282bdee465c891a2f000c8cdd0ad74da762446767d9533ddb92a.scope: Deactivated successfully.
Dec  3 16:22:18 np0005544708 podman[194262]: 2025-12-03 21:22:18.224411274 +0000 UTC m=+0.072921785 container create d91ce80a930f149e1355879979aaefddf6828b18a4dc30516ff16bdefb058c82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 16:22:18 np0005544708 python3.9[194256]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:18 np0005544708 systemd[1]: Started libpod-conmon-d91ce80a930f149e1355879979aaefddf6828b18a4dc30516ff16bdefb058c82.scope.
Dec  3 16:22:18 np0005544708 podman[194262]: 2025-12-03 21:22:18.195437197 +0000 UTC m=+0.043947778 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:22:18 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:22:18 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2431858eecb2eb73468ef675f6a2a8a747b5c67bc4ae89aaf04cc2b9b1d5a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:22:18 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2431858eecb2eb73468ef675f6a2a8a747b5c67bc4ae89aaf04cc2b9b1d5a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:22:18 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2431858eecb2eb73468ef675f6a2a8a747b5c67bc4ae89aaf04cc2b9b1d5a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:22:18 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2431858eecb2eb73468ef675f6a2a8a747b5c67bc4ae89aaf04cc2b9b1d5a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:22:18 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2431858eecb2eb73468ef675f6a2a8a747b5c67bc4ae89aaf04cc2b9b1d5a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:22:18 np0005544708 podman[194262]: 2025-12-03 21:22:18.314032806 +0000 UTC m=+0.162543387 container init d91ce80a930f149e1355879979aaefddf6828b18a4dc30516ff16bdefb058c82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 16:22:18 np0005544708 podman[194262]: 2025-12-03 21:22:18.333730233 +0000 UTC m=+0.182240784 container start d91ce80a930f149e1355879979aaefddf6828b18a4dc30516ff16bdefb058c82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 16:22:18 np0005544708 podman[194262]: 2025-12-03 21:22:18.337843563 +0000 UTC m=+0.186354114 container attach d91ce80a930f149e1355879979aaefddf6828b18a4dc30516ff16bdefb058c82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:22:18 np0005544708 python3.9[194413]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764796937.7007084-554-64762695299329/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:18 np0005544708 sad_grothendieck[194279]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:22:18 np0005544708 sad_grothendieck[194279]: --> All data devices are unavailable
Dec  3 16:22:18 np0005544708 systemd[1]: libpod-d91ce80a930f149e1355879979aaefddf6828b18a4dc30516ff16bdefb058c82.scope: Deactivated successfully.
Dec  3 16:22:18 np0005544708 podman[194262]: 2025-12-03 21:22:18.961780722 +0000 UTC m=+0.810291233 container died d91ce80a930f149e1355879979aaefddf6828b18a4dc30516ff16bdefb058c82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_grothendieck, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec  3 16:22:18 np0005544708 systemd[1]: var-lib-containers-storage-overlay-3b2431858eecb2eb73468ef675f6a2a8a747b5c67bc4ae89aaf04cc2b9b1d5a6-merged.mount: Deactivated successfully.
Dec  3 16:22:19 np0005544708 podman[194262]: 2025-12-03 21:22:19.008355931 +0000 UTC m=+0.856866462 container remove d91ce80a930f149e1355879979aaefddf6828b18a4dc30516ff16bdefb058c82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_grothendieck, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  3 16:22:19 np0005544708 systemd[1]: libpod-conmon-d91ce80a930f149e1355879979aaefddf6828b18a4dc30516ff16bdefb058c82.scope: Deactivated successfully.
Dec  3 16:22:19 np0005544708 podman[194645]: 2025-12-03 21:22:19.474186053 +0000 UTC m=+0.054540033 container create d00bb9642d0fc392aea27b0abcf7ba62e68830cfdfa4d8987c052d2125e7caaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lamport, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  3 16:22:19 np0005544708 systemd[1]: Started libpod-conmon-d00bb9642d0fc392aea27b0abcf7ba62e68830cfdfa4d8987c052d2125e7caaf.scope.
Dec  3 16:22:19 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:22:19 np0005544708 podman[194645]: 2025-12-03 21:22:19.446890741 +0000 UTC m=+0.027244781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:22:19 np0005544708 podman[194645]: 2025-12-03 21:22:19.554256889 +0000 UTC m=+0.134610869 container init d00bb9642d0fc392aea27b0abcf7ba62e68830cfdfa4d8987c052d2125e7caaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lamport, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:22:19 np0005544708 podman[194645]: 2025-12-03 21:22:19.5602875 +0000 UTC m=+0.140641460 container start d00bb9642d0fc392aea27b0abcf7ba62e68830cfdfa4d8987c052d2125e7caaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:22:19 np0005544708 podman[194645]: 2025-12-03 21:22:19.563905887 +0000 UTC m=+0.144259877 container attach d00bb9642d0fc392aea27b0abcf7ba62e68830cfdfa4d8987c052d2125e7caaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:22:19 np0005544708 festive_lamport[194663]: 167 167
Dec  3 16:22:19 np0005544708 systemd[1]: libpod-d00bb9642d0fc392aea27b0abcf7ba62e68830cfdfa4d8987c052d2125e7caaf.scope: Deactivated successfully.
Dec  3 16:22:19 np0005544708 conmon[194663]: conmon d00bb9642d0fc392aea2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d00bb9642d0fc392aea27b0abcf7ba62e68830cfdfa4d8987c052d2125e7caaf.scope/container/memory.events
Dec  3 16:22:19 np0005544708 podman[194645]: 2025-12-03 21:22:19.566816075 +0000 UTC m=+0.147170035 container died d00bb9642d0fc392aea27b0abcf7ba62e68830cfdfa4d8987c052d2125e7caaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:22:19 np0005544708 python3.9[194647]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:19 np0005544708 systemd[1]: var-lib-containers-storage-overlay-575d8cb8f6be56bda2f1e0d835fd17cedd2edaf049bddea5845a710c0568f0fe-merged.mount: Deactivated successfully.
Dec  3 16:22:19 np0005544708 podman[194645]: 2025-12-03 21:22:19.610363092 +0000 UTC m=+0.190717052 container remove d00bb9642d0fc392aea27b0abcf7ba62e68830cfdfa4d8987c052d2125e7caaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lamport, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:22:19 np0005544708 systemd[1]: libpod-conmon-d00bb9642d0fc392aea27b0abcf7ba62e68830cfdfa4d8987c052d2125e7caaf.scope: Deactivated successfully.
Dec  3 16:22:19 np0005544708 podman[194718]: 2025-12-03 21:22:19.806379695 +0000 UTC m=+0.067660545 container create c1b5cc1943ab2e35760ee83fefa3a1a423a92dc1437d3a78c33dd0911fb97805 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:22:19 np0005544708 systemd[1]: Started libpod-conmon-c1b5cc1943ab2e35760ee83fefa3a1a423a92dc1437d3a78c33dd0911fb97805.scope.
Dec  3 16:22:19 np0005544708 podman[194718]: 2025-12-03 21:22:19.779017141 +0000 UTC m=+0.040298041 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:22:19 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:22:19 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e2e01379306566ac285a67c4bc63d0df766e14055221e9da744db967051ed8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:22:19 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e2e01379306566ac285a67c4bc63d0df766e14055221e9da744db967051ed8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:22:19 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e2e01379306566ac285a67c4bc63d0df766e14055221e9da744db967051ed8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:22:19 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e2e01379306566ac285a67c4bc63d0df766e14055221e9da744db967051ed8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:22:19 np0005544708 podman[194718]: 2025-12-03 21:22:19.923509102 +0000 UTC m=+0.184789962 container init c1b5cc1943ab2e35760ee83fefa3a1a423a92dc1437d3a78c33dd0911fb97805 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:22:19 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v472: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:19 np0005544708 podman[194718]: 2025-12-03 21:22:19.940519818 +0000 UTC m=+0.201800668 container start c1b5cc1943ab2e35760ee83fefa3a1a423a92dc1437d3a78c33dd0911fb97805 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hermann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:22:19 np0005544708 podman[194718]: 2025-12-03 21:22:19.944964208 +0000 UTC m=+0.206245058 container attach c1b5cc1943ab2e35760ee83fefa3a1a423a92dc1437d3a78c33dd0911fb97805 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]: {
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:    "0": [
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:        {
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "devices": [
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "/dev/loop3"
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            ],
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "lv_name": "ceph_lv0",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "lv_size": "21470642176",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "name": "ceph_lv0",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "tags": {
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.cluster_name": "ceph",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.crush_device_class": "",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.encrypted": "0",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.objectstore": "bluestore",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.osd_id": "0",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.type": "block",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.vdo": "0",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.with_tpm": "0"
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            },
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "type": "block",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "vg_name": "ceph_vg0"
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:        }
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:    ],
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:    "1": [
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:        {
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "devices": [
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "/dev/loop4"
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            ],
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "lv_name": "ceph_lv1",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "lv_size": "21470642176",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "name": "ceph_lv1",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "tags": {
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.cluster_name": "ceph",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.crush_device_class": "",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.encrypted": "0",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.objectstore": "bluestore",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.osd_id": "1",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.type": "block",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.vdo": "0",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.with_tpm": "0"
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            },
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "type": "block",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "vg_name": "ceph_vg1"
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:        }
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:    ],
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:    "2": [
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:        {
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "devices": [
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "/dev/loop5"
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            ],
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "lv_name": "ceph_lv2",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "lv_size": "21470642176",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "name": "ceph_lv2",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "tags": {
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.cluster_name": "ceph",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.crush_device_class": "",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.encrypted": "0",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.objectstore": "bluestore",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.osd_id": "2",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.type": "block",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.vdo": "0",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:                "ceph.with_tpm": "0"
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            },
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "type": "block",
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:            "vg_name": "ceph_vg2"
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:        }
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]:    ]
Dec  3 16:22:20 np0005544708 compassionate_hermann[194773]: }
Dec  3 16:22:20 np0005544708 python3.9[194830]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764796939.0907476-554-99759697170507/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:20 np0005544708 systemd[1]: libpod-c1b5cc1943ab2e35760ee83fefa3a1a423a92dc1437d3a78c33dd0911fb97805.scope: Deactivated successfully.
Dec  3 16:22:20 np0005544708 podman[194718]: 2025-12-03 21:22:20.306234449 +0000 UTC m=+0.567515279 container died c1b5cc1943ab2e35760ee83fefa3a1a423a92dc1437d3a78c33dd0911fb97805 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hermann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:22:20 np0005544708 systemd[1]: var-lib-containers-storage-overlay-47e2e01379306566ac285a67c4bc63d0df766e14055221e9da744db967051ed8-merged.mount: Deactivated successfully.
Dec  3 16:22:20 np0005544708 podman[194718]: 2025-12-03 21:22:20.347096044 +0000 UTC m=+0.608376874 container remove c1b5cc1943ab2e35760ee83fefa3a1a423a92dc1437d3a78c33dd0911fb97805 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  3 16:22:20 np0005544708 systemd[1]: libpod-conmon-c1b5cc1943ab2e35760ee83fefa3a1a423a92dc1437d3a78c33dd0911fb97805.scope: Deactivated successfully.
Dec  3 16:22:20 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:22:20 np0005544708 podman[195029]: 2025-12-03 21:22:20.871168256 +0000 UTC m=+0.075216506 container create 9f6f16d8f0d6a4be873f400ce4af93c3a6f39cb8e5221849c6428ae769da5faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:22:20 np0005544708 systemd[1]: Started libpod-conmon-9f6f16d8f0d6a4be873f400ce4af93c3a6f39cb8e5221849c6428ae769da5faa.scope.
Dec  3 16:22:20 np0005544708 podman[195029]: 2025-12-03 21:22:20.840074593 +0000 UTC m=+0.044122903 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:22:20 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:22:20 np0005544708 podman[195029]: 2025-12-03 21:22:20.971815803 +0000 UTC m=+0.175864063 container init 9f6f16d8f0d6a4be873f400ce4af93c3a6f39cb8e5221849c6428ae769da5faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_yonath, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Dec  3 16:22:20 np0005544708 podman[195029]: 2025-12-03 21:22:20.982529729 +0000 UTC m=+0.186577949 container start 9f6f16d8f0d6a4be873f400ce4af93c3a6f39cb8e5221849c6428ae769da5faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:22:20 np0005544708 podman[195029]: 2025-12-03 21:22:20.985488558 +0000 UTC m=+0.189536878 container attach 9f6f16d8f0d6a4be873f400ce4af93c3a6f39cb8e5221849c6428ae769da5faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_yonath, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec  3 16:22:20 np0005544708 systemd[1]: libpod-9f6f16d8f0d6a4be873f400ce4af93c3a6f39cb8e5221849c6428ae769da5faa.scope: Deactivated successfully.
Dec  3 16:22:20 np0005544708 determined_yonath[195074]: 167 167
Dec  3 16:22:20 np0005544708 conmon[195074]: conmon 9f6f16d8f0d6a4be873f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9f6f16d8f0d6a4be873f400ce4af93c3a6f39cb8e5221849c6428ae769da5faa.scope/container/memory.events
Dec  3 16:22:20 np0005544708 podman[195029]: 2025-12-03 21:22:20.992323692 +0000 UTC m=+0.196371942 container died 9f6f16d8f0d6a4be873f400ce4af93c3a6f39cb8e5221849c6428ae769da5faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec  3 16:22:21 np0005544708 systemd[1]: var-lib-containers-storage-overlay-93df7e46bddb3fc0b2d9d58a8411833f911bfba73ae62a816aa7160b36e5a059-merged.mount: Deactivated successfully.
Dec  3 16:22:21 np0005544708 podman[195029]: 2025-12-03 21:22:21.032449157 +0000 UTC m=+0.236497387 container remove 9f6f16d8f0d6a4be873f400ce4af93c3a6f39cb8e5221849c6428ae769da5faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  3 16:22:21 np0005544708 systemd[1]: libpod-conmon-9f6f16d8f0d6a4be873f400ce4af93c3a6f39cb8e5221849c6428ae769da5faa.scope: Deactivated successfully.
Dec  3 16:22:21 np0005544708 python3.9[195071]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec  3 16:22:21 np0005544708 podman[195099]: 2025-12-03 21:22:21.189104425 +0000 UTC m=+0.034139446 container create 17a5edae3c68f0909f10cd3b12afdf438ba1b280bd6a956e1daf98e344716ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:22:21
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'vms', 'cephfs.cephfs.data', 'backups']
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:22:21 np0005544708 systemd[1]: Started libpod-conmon-17a5edae3c68f0909f10cd3b12afdf438ba1b280bd6a956e1daf98e344716ffd.scope.
Dec  3 16:22:21 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:22:21 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cdfc30e8da3ebc004b8bc419cfba64cfc4fa379e8d1f814ae74d734d24bffa0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:22:21 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cdfc30e8da3ebc004b8bc419cfba64cfc4fa379e8d1f814ae74d734d24bffa0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:22:21 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cdfc30e8da3ebc004b8bc419cfba64cfc4fa379e8d1f814ae74d734d24bffa0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:22:21 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cdfc30e8da3ebc004b8bc419cfba64cfc4fa379e8d1f814ae74d734d24bffa0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:22:21 np0005544708 podman[195099]: 2025-12-03 21:22:21.17473417 +0000 UTC m=+0.019769221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:22:21 np0005544708 podman[195099]: 2025-12-03 21:22:21.276049764 +0000 UTC m=+0.121084865 container init 17a5edae3c68f0909f10cd3b12afdf438ba1b280bd6a956e1daf98e344716ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:22:21 np0005544708 podman[195099]: 2025-12-03 21:22:21.287673806 +0000 UTC m=+0.132708867 container start 17a5edae3c68f0909f10cd3b12afdf438ba1b280bd6a956e1daf98e344716ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:22:21 np0005544708 podman[195099]: 2025-12-03 21:22:21.291523579 +0000 UTC m=+0.136558640 container attach 17a5edae3c68f0909f10cd3b12afdf438ba1b280bd6a956e1daf98e344716ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_swanson, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:22:21 np0005544708 python3.9[195304]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:21 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v473: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:21 np0005544708 lvm[195351]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:22:21 np0005544708 lvm[195351]: VG ceph_vg1 finished
Dec  3 16:22:21 np0005544708 lvm[195352]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:22:21 np0005544708 lvm[195352]: VG ceph_vg0 finished
Dec  3 16:22:22 np0005544708 lvm[195371]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:22:22 np0005544708 lvm[195371]: VG ceph_vg2 finished
Dec  3 16:22:22 np0005544708 gallant_swanson[195139]: {}
Dec  3 16:22:22 np0005544708 systemd[1]: libpod-17a5edae3c68f0909f10cd3b12afdf438ba1b280bd6a956e1daf98e344716ffd.scope: Deactivated successfully.
Dec  3 16:22:22 np0005544708 systemd[1]: libpod-17a5edae3c68f0909f10cd3b12afdf438ba1b280bd6a956e1daf98e344716ffd.scope: Consumed 1.374s CPU time.
Dec  3 16:22:22 np0005544708 podman[195099]: 2025-12-03 21:22:22.176994617 +0000 UTC m=+1.022029678 container died 17a5edae3c68f0909f10cd3b12afdf438ba1b280bd6a956e1daf98e344716ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_swanson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:22:22 np0005544708 systemd[1]: var-lib-containers-storage-overlay-8cdfc30e8da3ebc004b8bc419cfba64cfc4fa379e8d1f814ae74d734d24bffa0-merged.mount: Deactivated successfully.
Dec  3 16:22:22 np0005544708 podman[195099]: 2025-12-03 21:22:22.229244046 +0000 UTC m=+1.074279067 container remove 17a5edae3c68f0909f10cd3b12afdf438ba1b280bd6a956e1daf98e344716ffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_swanson, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 16:22:22 np0005544708 systemd[1]: libpod-conmon-17a5edae3c68f0909f10cd3b12afdf438ba1b280bd6a956e1daf98e344716ffd.scope: Deactivated successfully.
Dec  3 16:22:22 np0005544708 podman[195406]: 2025-12-03 21:22:22.265628951 +0000 UTC m=+0.056196177 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 16:22:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:22:22 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:22:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:22:22 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:22:22 np0005544708 python3.9[195558]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:23 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:22:23 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:22:23 np0005544708 python3.9[195710]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:23 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v474: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:24 np0005544708 python3.9[195862]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:25 np0005544708 python3.9[196014]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:22:25 np0005544708 python3.9[196166]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:25 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v475: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:26 np0005544708 python3.9[196318]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:27 np0005544708 python3.9[196470]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:22:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:22:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:22:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:22:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:22:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:22:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:22:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:22:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:22:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:22:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:22:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:22:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:22:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:22:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:22:27 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v476: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:28 np0005544708 python3.9[196622]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:28 np0005544708 python3.9[196774]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:29 np0005544708 python3.9[196926]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:29 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v477: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:30 np0005544708 python3.9[197078]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:30 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:22:31 np0005544708 python3.9[197230]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:31 np0005544708 python3.9[197382]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:31 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v478: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:32 np0005544708 python3.9[197534]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:33 np0005544708 python3.9[197657]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796952.0588698-775-125735920412027/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:33 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v479: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:34 np0005544708 python3.9[197809]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:34 np0005544708 python3.9[197932]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796953.669058-775-113652748602020/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:22:35 np0005544708 python3.9[198084]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:35 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v480: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:36 np0005544708 python3.9[198207]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796955.0608366-775-8421491780627/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:36 np0005544708 python3.9[198359]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:37 np0005544708 python3.9[198482]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796956.3651183-775-91283292380072/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:37 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v481: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:38 np0005544708 python3.9[198634]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:39 np0005544708 python3.9[198757]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796957.9022694-775-150774375380999/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:39 np0005544708 python3.9[198909]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:39 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v482: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:40 np0005544708 python3.9[199032]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796959.2957637-775-157251909379190/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:22:40 np0005544708 podman[199156]: 2025-12-03 21:22:40.997430559 +0000 UTC m=+0.133871649 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec  3 16:22:41 np0005544708 python3.9[199206]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:41 np0005544708 python3.9[199334]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796960.5886157-775-98184076052581/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:41 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v483: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:42 np0005544708 python3.9[199486]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:43 np0005544708 python3.9[199609]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796961.943828-775-108482491602721/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:43 np0005544708 python3.9[199761]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:43 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v484: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:44 np0005544708 python3.9[199884]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796963.393386-775-204007469252364/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:45 np0005544708 python3.9[200036]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:45 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:22:45 np0005544708 python3.9[200159]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796964.730089-775-161098405647775/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:45 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v485: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:46 np0005544708 python3.9[200311]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:47 np0005544708 python3.9[200434]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796966.1547184-775-233027491023874/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:47 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v486: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:48 np0005544708 python3.9[200586]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:48 np0005544708 python3.9[200709]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796967.4964285-775-267235759890829/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:22:48.925 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:22:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:22:48.926 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:22:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:22:48.926 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:22:49 np0005544708 python3.9[200861]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:49 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v487: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:50 np0005544708 python3.9[200984]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796968.871055-775-191039360193399/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:50 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:22:50 np0005544708 python3.9[201136]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:22:51 np0005544708 python3.9[201259]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764796970.2417803-775-218587424752784/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:22:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:22:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:22:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:22:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:22:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:22:51 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v488: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:52 np0005544708 python3.9[201409]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:22:53 np0005544708 podman[201536]: 2025-12-03 21:22:53.093946717 +0000 UTC m=+0.091909494 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  3 16:22:53 np0005544708 python3.9[201581]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec  3 16:22:53 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v489: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:55 np0005544708 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec  3 16:22:55 np0005544708 python3.9[201739]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:22:55 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v490: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:55 np0005544708 python3.9[201891]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:56 np0005544708 python3.9[202043]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:57 np0005544708 python3.9[202195]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:57 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v491: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:22:58 np0005544708 python3.9[202347]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:59 np0005544708 python3.9[202499]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:59 np0005544708 python3.9[202651]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:22:59 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v492: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:00 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:23:00 np0005544708 python3.9[202803]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:01 np0005544708 auditd[706]: Audit daemon rotating log files
Dec  3 16:23:01 np0005544708 python3.9[202955]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:01 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v493: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:02 np0005544708 python3.9[203107]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:03 np0005544708 python3.9[203259]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 16:23:03 np0005544708 systemd[1]: Reloading.
Dec  3 16:23:03 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:23:03 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:23:03 np0005544708 systemd[1]: Starting libvirt logging daemon socket...
Dec  3 16:23:03 np0005544708 systemd[1]: Listening on libvirt logging daemon socket.
Dec  3 16:23:03 np0005544708 systemd[1]: Starting libvirt logging daemon admin socket...
Dec  3 16:23:03 np0005544708 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec  3 16:23:03 np0005544708 systemd[1]: Starting libvirt logging daemon...
Dec  3 16:23:03 np0005544708 systemd[1]: Started libvirt logging daemon.
Dec  3 16:23:03 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v494: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:04 np0005544708 python3.9[203452]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 16:23:04 np0005544708 systemd[1]: Reloading.
Dec  3 16:23:04 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:23:04 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:23:05 np0005544708 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec  3 16:23:05 np0005544708 systemd[1]: Starting libvirt nodedev daemon socket...
Dec  3 16:23:05 np0005544708 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec  3 16:23:05 np0005544708 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec  3 16:23:05 np0005544708 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec  3 16:23:05 np0005544708 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec  3 16:23:05 np0005544708 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec  3 16:23:05 np0005544708 systemd[1]: Starting libvirt nodedev daemon...
Dec  3 16:23:05 np0005544708 systemd[1]: Started libvirt nodedev daemon.
Dec  3 16:23:05 np0005544708 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec  3 16:23:05 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:23:05 np0005544708 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec  3 16:23:05 np0005544708 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec  3 16:23:05 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v495: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:06 np0005544708 python3.9[203676]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 16:23:06 np0005544708 systemd[1]: Reloading.
Dec  3 16:23:06 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:23:06 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:23:06 np0005544708 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec  3 16:23:06 np0005544708 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec  3 16:23:06 np0005544708 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec  3 16:23:06 np0005544708 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec  3 16:23:06 np0005544708 systemd[1]: Starting libvirt proxy daemon...
Dec  3 16:23:06 np0005544708 systemd[1]: Started libvirt proxy daemon.
Dec  3 16:23:06 np0005544708 setroubleshoot[203489]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 24592f25-fc0e-4d0c-8df9-da19c4f92acf
Dec  3 16:23:06 np0005544708 setroubleshoot[203489]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
Dec  3 16:23:07 np0005544708 python3.9[203891]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 16:23:07 np0005544708 systemd[1]: Reloading.
Dec  3 16:23:07 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:23:07 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:23:07 np0005544708 systemd[1]: Listening on libvirt locking daemon socket.
Dec  3 16:23:07 np0005544708 systemd[1]: Starting libvirt QEMU daemon socket...
Dec  3 16:23:07 np0005544708 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec  3 16:23:07 np0005544708 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec  3 16:23:07 np0005544708 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec  3 16:23:07 np0005544708 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec  3 16:23:07 np0005544708 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec  3 16:23:07 np0005544708 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec  3 16:23:07 np0005544708 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec  3 16:23:07 np0005544708 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec  3 16:23:07 np0005544708 systemd[1]: Starting libvirt QEMU daemon...
Dec  3 16:23:07 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v496: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:07 np0005544708 systemd[1]: Started libvirt QEMU daemon.
Dec  3 16:23:08 np0005544708 python3.9[204106]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 16:23:08 np0005544708 systemd[1]: Reloading.
Dec  3 16:23:08 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:23:08 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:23:09 np0005544708 systemd[1]: Starting libvirt secret daemon socket...
Dec  3 16:23:09 np0005544708 systemd[1]: Listening on libvirt secret daemon socket.
Dec  3 16:23:09 np0005544708 systemd[1]: Starting libvirt secret daemon admin socket...
Dec  3 16:23:09 np0005544708 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec  3 16:23:09 np0005544708 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec  3 16:23:09 np0005544708 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec  3 16:23:09 np0005544708 systemd[1]: Starting libvirt secret daemon...
Dec  3 16:23:09 np0005544708 systemd[1]: Started libvirt secret daemon.
Dec  3 16:23:09 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v497: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:10 np0005544708 python3.9[204319]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:23:10 np0005544708 python3.9[204471]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 16:23:11 np0005544708 podman[204485]: 2025-12-03 21:23:11.200861189 +0000 UTC m=+0.127761589 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 16:23:11 np0005544708 python3.9[204650]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:23:11 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v498: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:12 np0005544708 python3.9[204804]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 16:23:13 np0005544708 python3.9[204954]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:23:13 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v499: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:14 np0005544708 python3.9[205075]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764796993.0570912-1133-244076723217688/.source.xml follow=False _original_basename=secret.xml.j2 checksum=eb399b9585c91cefe0c954882ea59ab92b0cdac8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:23:15 np0005544708 python3.9[205227]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine c21de27e-a7fd-594b-8324-0697ba9aab3a#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:23:15 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v500: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:16 np0005544708 python3.9[205389]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:16 np0005544708 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec  3 16:23:16 np0005544708 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.030s CPU time.
Dec  3 16:23:16 np0005544708 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec  3 16:23:17 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v501: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:19 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v502: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:20 np0005544708 python3.9[205852]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:20 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:23:20 np0005544708 python3.9[206004]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:23:21
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['.mgr', 'vms', 'images', 'volumes', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data']
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:23:21 np0005544708 python3.9[206127]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764797000.3141425-1188-69356850246978/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:23:21 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v503: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:22 np0005544708 python3.9[206302]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:23:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:23:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:23:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:23:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:23:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:23:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:23:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:23:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:23:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:23:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:23:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:23:23 np0005544708 podman[206484]: 2025-12-03 21:23:23.26938265 +0000 UTC m=+0.065613892 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  3 16:23:23 np0005544708 python3.9[206533]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:23:23 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:23:23 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:23:23 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:23:23 np0005544708 podman[206622]: 2025-12-03 21:23:23.687446511 +0000 UTC m=+0.048121469 container create 5b9309714ec9d3591158c7e9fab166591a04bbfd61d2788acf31b09983515270 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 16:23:23 np0005544708 systemd[1]: Started libpod-conmon-5b9309714ec9d3591158c7e9fab166591a04bbfd61d2788acf31b09983515270.scope.
Dec  3 16:23:23 np0005544708 podman[206622]: 2025-12-03 21:23:23.670640948 +0000 UTC m=+0.031315946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:23:23 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:23:23 np0005544708 podman[206622]: 2025-12-03 21:23:23.800988155 +0000 UTC m=+0.161663143 container init 5b9309714ec9d3591158c7e9fab166591a04bbfd61d2788acf31b09983515270 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_diffie, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:23:23 np0005544708 podman[206622]: 2025-12-03 21:23:23.808720964 +0000 UTC m=+0.169395922 container start 5b9309714ec9d3591158c7e9fab166591a04bbfd61d2788acf31b09983515270 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_diffie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  3 16:23:23 np0005544708 podman[206622]: 2025-12-03 21:23:23.812053474 +0000 UTC m=+0.172728472 container attach 5b9309714ec9d3591158c7e9fab166591a04bbfd61d2788acf31b09983515270 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_diffie, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:23:23 np0005544708 competent_diffie[206686]: 167 167
Dec  3 16:23:23 np0005544708 systemd[1]: libpod-5b9309714ec9d3591158c7e9fab166591a04bbfd61d2788acf31b09983515270.scope: Deactivated successfully.
Dec  3 16:23:23 np0005544708 podman[206622]: 2025-12-03 21:23:23.814862349 +0000 UTC m=+0.175537357 container died 5b9309714ec9d3591158c7e9fab166591a04bbfd61d2788acf31b09983515270 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_diffie, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  3 16:23:23 np0005544708 systemd[1]: var-lib-containers-storage-overlay-7e2cef015a2e8ca04c3403b220cdf6f888d697c3b9eb95acf8094c07c8719cf5-merged.mount: Deactivated successfully.
Dec  3 16:23:23 np0005544708 podman[206622]: 2025-12-03 21:23:23.868090156 +0000 UTC m=+0.228765114 container remove 5b9309714ec9d3591158c7e9fab166591a04bbfd61d2788acf31b09983515270 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_diffie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:23:23 np0005544708 systemd[1]: libpod-conmon-5b9309714ec9d3591158c7e9fab166591a04bbfd61d2788acf31b09983515270.scope: Deactivated successfully.
Dec  3 16:23:23 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v504: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:23 np0005544708 python3.9[206688]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:24 np0005544708 podman[206711]: 2025-12-03 21:23:24.103053777 +0000 UTC m=+0.074162882 container create e9aac3d9ebbe1f7442ce6f927913c9165650e2838b12108277d0653f9ef5cf0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:23:24 np0005544708 systemd[1]: Started libpod-conmon-e9aac3d9ebbe1f7442ce6f927913c9165650e2838b12108277d0653f9ef5cf0c.scope.
Dec  3 16:23:24 np0005544708 podman[206711]: 2025-12-03 21:23:24.071256439 +0000 UTC m=+0.042365594 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:23:24 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:23:24 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2f332692e413ef3b8093cfc00b992e82812c800508cc7e8136f6c6f2914f586/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:23:24 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2f332692e413ef3b8093cfc00b992e82812c800508cc7e8136f6c6f2914f586/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:23:24 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2f332692e413ef3b8093cfc00b992e82812c800508cc7e8136f6c6f2914f586/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:23:24 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2f332692e413ef3b8093cfc00b992e82812c800508cc7e8136f6c6f2914f586/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:23:24 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2f332692e413ef3b8093cfc00b992e82812c800508cc7e8136f6c6f2914f586/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:23:24 np0005544708 podman[206711]: 2025-12-03 21:23:24.2173172 +0000 UTC m=+0.188426295 container init e9aac3d9ebbe1f7442ce6f927913c9165650e2838b12108277d0653f9ef5cf0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_matsumoto, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:23:24 np0005544708 podman[206711]: 2025-12-03 21:23:24.235162872 +0000 UTC m=+0.206271977 container start e9aac3d9ebbe1f7442ce6f927913c9165650e2838b12108277d0653f9ef5cf0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:23:24 np0005544708 podman[206711]: 2025-12-03 21:23:24.240202318 +0000 UTC m=+0.211311403 container attach e9aac3d9ebbe1f7442ce6f927913c9165650e2838b12108277d0653f9ef5cf0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_matsumoto, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:23:24 np0005544708 awesome_matsumoto[206752]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:23:24 np0005544708 awesome_matsumoto[206752]: --> All data devices are unavailable
Dec  3 16:23:24 np0005544708 systemd[1]: libpod-e9aac3d9ebbe1f7442ce6f927913c9165650e2838b12108277d0653f9ef5cf0c.scope: Deactivated successfully.
Dec  3 16:23:24 np0005544708 podman[206711]: 2025-12-03 21:23:24.829955293 +0000 UTC m=+0.801064358 container died e9aac3d9ebbe1f7442ce6f927913c9165650e2838b12108277d0653f9ef5cf0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_matsumoto, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  3 16:23:24 np0005544708 systemd[1]: var-lib-containers-storage-overlay-b2f332692e413ef3b8093cfc00b992e82812c800508cc7e8136f6c6f2914f586-merged.mount: Deactivated successfully.
Dec  3 16:23:24 np0005544708 podman[206711]: 2025-12-03 21:23:24.898863022 +0000 UTC m=+0.869972127 container remove e9aac3d9ebbe1f7442ce6f927913c9165650e2838b12108277d0653f9ef5cf0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 16:23:24 np0005544708 python3.9[206891]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:23:24 np0005544708 systemd[1]: libpod-conmon-e9aac3d9ebbe1f7442ce6f927913c9165650e2838b12108277d0653f9ef5cf0c.scope: Deactivated successfully.
Dec  3 16:23:25 np0005544708 python3.9[207040]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.76bi4mu5 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:25 np0005544708 podman[207054]: 2025-12-03 21:23:25.430941211 +0000 UTC m=+0.062910188 container create 0c6268004dffb7241b4d788f75e9036bf64d8d6db4a46fdd1f8be19748ecb9da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_lovelace, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:23:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:23:25 np0005544708 systemd[1]: Started libpod-conmon-0c6268004dffb7241b4d788f75e9036bf64d8d6db4a46fdd1f8be19748ecb9da.scope.
Dec  3 16:23:25 np0005544708 podman[207054]: 2025-12-03 21:23:25.40829094 +0000 UTC m=+0.040259927 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:23:25 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:23:25 np0005544708 podman[207054]: 2025-12-03 21:23:25.529720808 +0000 UTC m=+0.161689795 container init 0c6268004dffb7241b4d788f75e9036bf64d8d6db4a46fdd1f8be19748ecb9da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_lovelace, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  3 16:23:25 np0005544708 podman[207054]: 2025-12-03 21:23:25.542332267 +0000 UTC m=+0.174301234 container start 0c6268004dffb7241b4d788f75e9036bf64d8d6db4a46fdd1f8be19748ecb9da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_lovelace, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec  3 16:23:25 np0005544708 podman[207054]: 2025-12-03 21:23:25.545933734 +0000 UTC m=+0.177902721 container attach 0c6268004dffb7241b4d788f75e9036bf64d8d6db4a46fdd1f8be19748ecb9da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  3 16:23:25 np0005544708 infallible_lovelace[207087]: 167 167
Dec  3 16:23:25 np0005544708 systemd[1]: libpod-0c6268004dffb7241b4d788f75e9036bf64d8d6db4a46fdd1f8be19748ecb9da.scope: Deactivated successfully.
Dec  3 16:23:25 np0005544708 podman[207054]: 2025-12-03 21:23:25.549975804 +0000 UTC m=+0.181944791 container died 0c6268004dffb7241b4d788f75e9036bf64d8d6db4a46fdd1f8be19748ecb9da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:23:25 np0005544708 systemd[1]: var-lib-containers-storage-overlay-89d8306c9ab86f21e9d7f2ffee11c455f66a8ff55e491246eaf457f0465bca13-merged.mount: Deactivated successfully.
Dec  3 16:23:25 np0005544708 podman[207054]: 2025-12-03 21:23:25.600729674 +0000 UTC m=+0.232698631 container remove 0c6268004dffb7241b4d788f75e9036bf64d8d6db4a46fdd1f8be19748ecb9da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_lovelace, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:23:25 np0005544708 systemd[1]: libpod-conmon-0c6268004dffb7241b4d788f75e9036bf64d8d6db4a46fdd1f8be19748ecb9da.scope: Deactivated successfully.
Dec  3 16:23:25 np0005544708 podman[207163]: 2025-12-03 21:23:25.812986412 +0000 UTC m=+0.060044392 container create 983abcff2e77e606016d68776cade8ea64cb9d796d4867c013ab6627599a62aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:23:25 np0005544708 systemd[1]: Started libpod-conmon-983abcff2e77e606016d68776cade8ea64cb9d796d4867c013ab6627599a62aa.scope.
Dec  3 16:23:25 np0005544708 podman[207163]: 2025-12-03 21:23:25.782679284 +0000 UTC m=+0.029737314 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:23:25 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:23:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79224915a610e681cbfa332a46369a049c7238088ca88e71988cf29d01f20e4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:23:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79224915a610e681cbfa332a46369a049c7238088ca88e71988cf29d01f20e4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:23:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79224915a610e681cbfa332a46369a049c7238088ca88e71988cf29d01f20e4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:23:25 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79224915a610e681cbfa332a46369a049c7238088ca88e71988cf29d01f20e4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:23:25 np0005544708 podman[207163]: 2025-12-03 21:23:25.905159199 +0000 UTC m=+0.152217229 container init 983abcff2e77e606016d68776cade8ea64cb9d796d4867c013ab6627599a62aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_yonath, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:23:25 np0005544708 podman[207163]: 2025-12-03 21:23:25.922153998 +0000 UTC m=+0.169211968 container start 983abcff2e77e606016d68776cade8ea64cb9d796d4867c013ab6627599a62aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_yonath, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:23:25 np0005544708 podman[207163]: 2025-12-03 21:23:25.925622401 +0000 UTC m=+0.172680441 container attach 983abcff2e77e606016d68776cade8ea64cb9d796d4867c013ab6627599a62aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_yonath, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:23:25 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v505: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:26 np0005544708 python3.9[207266]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]: {
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:    "0": [
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:        {
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "devices": [
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "/dev/loop3"
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            ],
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "lv_name": "ceph_lv0",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "lv_size": "21470642176",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "name": "ceph_lv0",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "tags": {
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.cluster_name": "ceph",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.crush_device_class": "",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.encrypted": "0",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.objectstore": "bluestore",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.osd_id": "0",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.type": "block",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.vdo": "0",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.with_tpm": "0"
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            },
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "type": "block",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "vg_name": "ceph_vg0"
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:        }
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:    ],
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:    "1": [
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:        {
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "devices": [
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "/dev/loop4"
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            ],
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "lv_name": "ceph_lv1",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "lv_size": "21470642176",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "name": "ceph_lv1",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "tags": {
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.cluster_name": "ceph",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.crush_device_class": "",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.encrypted": "0",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.objectstore": "bluestore",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.osd_id": "1",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.type": "block",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.vdo": "0",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.with_tpm": "0"
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            },
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "type": "block",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "vg_name": "ceph_vg1"
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:        }
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:    ],
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:    "2": [
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:        {
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "devices": [
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "/dev/loop5"
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            ],
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "lv_name": "ceph_lv2",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "lv_size": "21470642176",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "name": "ceph_lv2",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "tags": {
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.cluster_name": "ceph",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.crush_device_class": "",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.encrypted": "0",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.objectstore": "bluestore",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.osd_id": "2",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.type": "block",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.vdo": "0",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:                "ceph.with_tpm": "0"
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            },
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "type": "block",
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:            "vg_name": "ceph_vg2"
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:        }
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]:    ]
Dec  3 16:23:26 np0005544708 jovial_yonath[207209]: }
Dec  3 16:23:26 np0005544708 systemd[1]: libpod-983abcff2e77e606016d68776cade8ea64cb9d796d4867c013ab6627599a62aa.scope: Deactivated successfully.
Dec  3 16:23:26 np0005544708 podman[207273]: 2025-12-03 21:23:26.356811967 +0000 UTC m=+0.040009111 container died 983abcff2e77e606016d68776cade8ea64cb9d796d4867c013ab6627599a62aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 16:23:26 np0005544708 systemd[1]: var-lib-containers-storage-overlay-79224915a610e681cbfa332a46369a049c7238088ca88e71988cf29d01f20e4d-merged.mount: Deactivated successfully.
Dec  3 16:23:26 np0005544708 podman[207273]: 2025-12-03 21:23:26.406364424 +0000 UTC m=+0.089561558 container remove 983abcff2e77e606016d68776cade8ea64cb9d796d4867c013ab6627599a62aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:23:26 np0005544708 systemd[1]: libpod-conmon-983abcff2e77e606016d68776cade8ea64cb9d796d4867c013ab6627599a62aa.scope: Deactivated successfully.
Dec  3 16:23:26 np0005544708 python3.9[207387]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:27 np0005544708 podman[207450]: 2025-12-03 21:23:27.033247221 +0000 UTC m=+0.066471934 container create 142e6d34b327cd7fd07e4ba0fe4e05c0290fc60c009745841bd4fdb1906733e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_shtern, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 16:23:27 np0005544708 systemd[1]: Started libpod-conmon-142e6d34b327cd7fd07e4ba0fe4e05c0290fc60c009745841bd4fdb1906733e5.scope.
Dec  3 16:23:27 np0005544708 podman[207450]: 2025-12-03 21:23:27.003872509 +0000 UTC m=+0.037097282 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:23:27 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:23:27 np0005544708 podman[207450]: 2025-12-03 21:23:27.115004607 +0000 UTC m=+0.148229310 container init 142e6d34b327cd7fd07e4ba0fe4e05c0290fc60c009745841bd4fdb1906733e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_shtern, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:23:27 np0005544708 podman[207450]: 2025-12-03 21:23:27.121975986 +0000 UTC m=+0.155200679 container start 142e6d34b327cd7fd07e4ba0fe4e05c0290fc60c009745841bd4fdb1906733e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_shtern, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:23:27 np0005544708 podman[207450]: 2025-12-03 21:23:27.124968836 +0000 UTC m=+0.158193529 container attach 142e6d34b327cd7fd07e4ba0fe4e05c0290fc60c009745841bd4fdb1906733e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  3 16:23:27 np0005544708 objective_shtern[207512]: 167 167
Dec  3 16:23:27 np0005544708 systemd[1]: libpod-142e6d34b327cd7fd07e4ba0fe4e05c0290fc60c009745841bd4fdb1906733e5.scope: Deactivated successfully.
Dec  3 16:23:27 np0005544708 podman[207450]: 2025-12-03 21:23:27.127276969 +0000 UTC m=+0.160501672 container died 142e6d34b327cd7fd07e4ba0fe4e05c0290fc60c009745841bd4fdb1906733e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  3 16:23:27 np0005544708 systemd[1]: var-lib-containers-storage-overlay-4c633a7b308c590406614c24f4fdba1ed6999b90fb872f1a00dbbd2fe059141b-merged.mount: Deactivated successfully.
Dec  3 16:23:27 np0005544708 podman[207450]: 2025-12-03 21:23:27.171695318 +0000 UTC m=+0.204920011 container remove 142e6d34b327cd7fd07e4ba0fe4e05c0290fc60c009745841bd4fdb1906733e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec  3 16:23:27 np0005544708 systemd[1]: libpod-conmon-142e6d34b327cd7fd07e4ba0fe4e05c0290fc60c009745841bd4fdb1906733e5.scope: Deactivated successfully.
Dec  3 16:23:27 np0005544708 podman[207615]: 2025-12-03 21:23:27.370118251 +0000 UTC m=+0.057116641 container create d04fa279c88dd200b477ce4bfc0758d9e702a4382fc11fb056d67ac51cc0cdbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lalande, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  3 16:23:27 np0005544708 systemd[1]: Started libpod-conmon-d04fa279c88dd200b477ce4bfc0758d9e702a4382fc11fb056d67ac51cc0cdbf.scope.
Dec  3 16:23:27 np0005544708 podman[207615]: 2025-12-03 21:23:27.343888234 +0000 UTC m=+0.030886684 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:23:27 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:23:27 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11e009ee7eb74e0daaae94599661adc0e44348e177d6bda88d11ff8c452c7b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:23:27 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11e009ee7eb74e0daaae94599661adc0e44348e177d6bda88d11ff8c452c7b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:23:27 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11e009ee7eb74e0daaae94599661adc0e44348e177d6bda88d11ff8c452c7b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:23:27 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11e009ee7eb74e0daaae94599661adc0e44348e177d6bda88d11ff8c452c7b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:23:27 np0005544708 podman[207615]: 2025-12-03 21:23:27.475169517 +0000 UTC m=+0.162167937 container init d04fa279c88dd200b477ce4bfc0758d9e702a4382fc11fb056d67ac51cc0cdbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  3 16:23:27 np0005544708 podman[207615]: 2025-12-03 21:23:27.488194229 +0000 UTC m=+0.175192619 container start d04fa279c88dd200b477ce4bfc0758d9e702a4382fc11fb056d67ac51cc0cdbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lalande, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:23:27 np0005544708 podman[207615]: 2025-12-03 21:23:27.492180906 +0000 UTC m=+0.179179286 container attach d04fa279c88dd200b477ce4bfc0758d9e702a4382fc11fb056d67ac51cc0cdbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:23:27 np0005544708 python3.9[207628]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:23:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:23:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:23:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:23:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:23:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:23:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:23:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:23:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:23:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:23:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:23:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:23:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:23:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:23:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:23:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:23:27 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v506: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:28 np0005544708 lvm[207812]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:23:28 np0005544708 lvm[207812]: VG ceph_vg0 finished
Dec  3 16:23:28 np0005544708 lvm[207813]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:23:28 np0005544708 lvm[207813]: VG ceph_vg1 finished
Dec  3 16:23:28 np0005544708 lvm[207816]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:23:28 np0005544708 lvm[207816]: VG ceph_vg2 finished
Dec  3 16:23:28 np0005544708 lvm[207841]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:23:28 np0005544708 lvm[207841]: VG ceph_vg2 finished
Dec  3 16:23:28 np0005544708 dreamy_lalande[207634]: {}
Dec  3 16:23:28 np0005544708 systemd[1]: libpod-d04fa279c88dd200b477ce4bfc0758d9e702a4382fc11fb056d67ac51cc0cdbf.scope: Deactivated successfully.
Dec  3 16:23:28 np0005544708 systemd[1]: libpod-d04fa279c88dd200b477ce4bfc0758d9e702a4382fc11fb056d67ac51cc0cdbf.scope: Consumed 1.340s CPU time.
Dec  3 16:23:28 np0005544708 podman[207615]: 2025-12-03 21:23:28.362553024 +0000 UTC m=+1.049551404 container died d04fa279c88dd200b477ce4bfc0758d9e702a4382fc11fb056d67ac51cc0cdbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lalande, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec  3 16:23:28 np0005544708 systemd[1]: var-lib-containers-storage-overlay-d11e009ee7eb74e0daaae94599661adc0e44348e177d6bda88d11ff8c452c7b7-merged.mount: Deactivated successfully.
Dec  3 16:23:28 np0005544708 podman[207615]: 2025-12-03 21:23:28.418601097 +0000 UTC m=+1.105599457 container remove d04fa279c88dd200b477ce4bfc0758d9e702a4382fc11fb056d67ac51cc0cdbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lalande, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:23:28 np0005544708 systemd[1]: libpod-conmon-d04fa279c88dd200b477ce4bfc0758d9e702a4382fc11fb056d67ac51cc0cdbf.scope: Deactivated successfully.
Dec  3 16:23:28 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:23:28 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:23:28 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:23:28 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:23:28 np0005544708 python3[207881]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  3 16:23:29 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:23:29 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:23:29 np0005544708 python3.9[208059]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:23:29 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v507: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:30 np0005544708 python3.9[208137]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:30 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:23:30 np0005544708 python3.9[208289]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:23:31 np0005544708 python3.9[208367]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:31 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v508: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:32 np0005544708 python3.9[208519]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:23:32 np0005544708 python3.9[208597]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:33 np0005544708 python3.9[208749]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:23:33 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v509: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:34 np0005544708 python3.9[208827]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:35 np0005544708 python3.9[208979]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:23:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:23:35 np0005544708 python3.9[209104]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764797014.4775033-1313-154968413418526/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:35 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v510: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:36 np0005544708 python3.9[209256]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:37 np0005544708 python3.9[209408]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:23:37 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v511: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:38 np0005544708 python3.9[209563]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:39 np0005544708 python3.9[209715]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:23:39 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v512: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:40 np0005544708 python3.9[209868]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:23:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:23:41 np0005544708 python3.9[210022]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:23:41 np0005544708 podman[210149]: 2025-12-03 21:23:41.764024556 +0000 UTC m=+0.130471482 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:23:41 np0005544708 python3.9[210196]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:41 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v513: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:42 np0005544708 python3.9[210355]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:23:43 np0005544708 python3.9[210478]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764797022.1774807-1385-148802998682543/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:43 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v514: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:44 np0005544708 python3.9[210630]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:23:45 np0005544708 python3.9[210753]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764797023.7467504-1400-156375556082131/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:45 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:23:45 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v515: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:45 np0005544708 python3.9[210905]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:23:46 np0005544708 python3.9[211028]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764797025.3866394-1415-234285563632014/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:23:47 np0005544708 python3.9[211180]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:23:47 np0005544708 systemd[1]: Reloading.
Dec  3 16:23:47 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:23:47 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:23:47 np0005544708 systemd[1]: Reached target edpm_libvirt.target.
Dec  3 16:23:47 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v516: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:48 np0005544708 python3.9[211370]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  3 16:23:48 np0005544708 systemd[1]: Reloading.
Dec  3 16:23:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:23:48.926 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 16:23:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:23:48.928 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 16:23:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:23:48.928 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 16:23:48 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:23:48 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:23:49 np0005544708 systemd[1]: Reloading.
Dec  3 16:23:49 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:23:49 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:23:49 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v517: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:49 np0005544708 systemd[1]: session-48.scope: Deactivated successfully.
Dec  3 16:23:49 np0005544708 systemd[1]: session-48.scope: Consumed 4min 171ms CPU time.
Dec  3 16:23:49 np0005544708 systemd-logind[787]: Session 48 logged out. Waiting for processes to exit.
Dec  3 16:23:49 np0005544708 systemd-logind[787]: Removed session 48.
Dec  3 16:23:50 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:23:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:23:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:23:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:23:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:23:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:23:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:23:51 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v518: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:23:53.846071) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797033846176, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2044, "num_deletes": 251, "total_data_size": 2380796, "memory_usage": 2425616, "flush_reason": "Manual Compaction"}
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797033865437, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2308951, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9052, "largest_seqno": 11095, "table_properties": {"data_size": 2299745, "index_size": 5828, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17908, "raw_average_key_size": 19, "raw_value_size": 2281328, "raw_average_value_size": 2482, "num_data_blocks": 268, "num_entries": 919, "num_filter_entries": 919, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796798, "oldest_key_time": 1764796798, "file_creation_time": 1764797033, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 19520 microseconds, and 10596 cpu microseconds.
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:23:53.865620) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2308951 bytes OK
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:23:53.865669) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:23:53.867047) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:23:53.867061) EVENT_LOG_v1 {"time_micros": 1764797033867057, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:23:53.867079) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2372261, prev total WAL file size 2372261, number of live WAL files 2.
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:23:53.867913) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2254KB)], [26(4750KB)]
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797033867969, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 7173959, "oldest_snapshot_seqno": -1}
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3208 keys, 6017410 bytes, temperature: kUnknown
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797033904614, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 6017410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5991407, "index_size": 16869, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 74048, "raw_average_key_size": 23, "raw_value_size": 5929471, "raw_average_value_size": 1848, "num_data_blocks": 745, "num_entries": 3208, "num_filter_entries": 3208, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796079, "oldest_key_time": 0, "file_creation_time": 1764797033, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:23:53.904933) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 6017410 bytes
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:23:53.906249) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.2 rd, 163.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 4.6 +0.0 blob) out(5.7 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 3722, records dropped: 514 output_compression: NoCompression
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:23:53.906269) EVENT_LOG_v1 {"time_micros": 1764797033906259, "job": 10, "event": "compaction_finished", "compaction_time_micros": 36750, "compaction_time_cpu_micros": 12925, "output_level": 6, "num_output_files": 1, "total_output_size": 6017410, "num_input_records": 3722, "num_output_records": 3208, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797033906918, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797033907995, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:23:53.867821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:23:53.908066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:23:53.908075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:23:53.908077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:23:53.908079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:23:53 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:23:53.908081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:23:53 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v519: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:54 np0005544708 podman[211467]: 2025-12-03 21:23:54.113870178 +0000 UTC m=+0.057850992 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 16:23:55 np0005544708 systemd-logind[787]: New session 49 of user zuul.
Dec  3 16:23:55 np0005544708 systemd[1]: Started Session 49 of User zuul.
Dec  3 16:23:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:23:55 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v520: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:56 np0005544708 python3.9[211639]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:23:57 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v521: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:23:58 np0005544708 python3.9[211793]: ansible-ansible.builtin.service_facts Invoked
Dec  3 16:23:58 np0005544708 network[211810]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 16:23:58 np0005544708 network[211811]: 'network-scripts' will be removed from distribution in near future.
Dec  3 16:23:58 np0005544708 network[211812]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 16:23:59 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v522: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:00 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:24:01 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v523: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:03 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v524: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:05 np0005544708 python3.9[212084]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  3 16:24:05 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:24:05 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v525: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:06 np0005544708 python3.9[212168]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:24:07 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v526: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:09 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v527: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:24:11 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v528: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:12 np0005544708 podman[212170]: 2025-12-03 21:24:12.22591636 +0000 UTC m=+0.144023143 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 16:24:13 np0005544708 python3.9[212347]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:24:13 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v529: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:14 np0005544708 python3.9[212499]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:24:15 np0005544708 python3.9[212652]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:24:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:24:15 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v530: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:15 np0005544708 python3.9[212804]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:24:16 np0005544708 python3.9[212957]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:24:17 np0005544708 python3.9[213080]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764797056.2348127-95-238511596842489/.source.iscsi _original_basename=.nfh_tch2 follow=False checksum=9d1afd0835abf4d74a5804351287da0dac4ada05 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:17 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v531: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:18 np0005544708 python3.9[213232]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:19 np0005544708 python3.9[213384]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:19 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v532: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:20 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:24:20 np0005544708 python3.9[213536]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:24:20 np0005544708 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:24:21
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'images', 'backups', 'cephfs.cephfs.data', 'volumes', 'vms']
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:24:21 np0005544708 python3.9[213692]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:24:21 np0005544708 systemd[1]: Reloading.
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:24:21 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v533: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:21 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:24:21 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:24:22 np0005544708 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  3 16:24:22 np0005544708 systemd[1]: Starting Open-iSCSI...
Dec  3 16:24:22 np0005544708 kernel: Loading iSCSI transport class v2.0-870.
Dec  3 16:24:22 np0005544708 systemd[1]: Started Open-iSCSI.
Dec  3 16:24:22 np0005544708 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec  3 16:24:22 np0005544708 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec  3 16:24:23 np0005544708 python3.9[213893]: ansible-ansible.builtin.service_facts Invoked
Dec  3 16:24:23 np0005544708 network[213910]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 16:24:23 np0005544708 network[213911]: 'network-scripts' will be removed from distribution in near future.
Dec  3 16:24:23 np0005544708 network[213912]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 16:24:23 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v534: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:24 np0005544708 podman[213918]: 2025-12-03 21:24:24.410231732 +0000 UTC m=+0.097675310 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  3 16:24:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:24:25 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v535: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:24:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:24:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:24:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:24:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:24:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:24:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:24:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:24:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:24:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:24:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:24:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:24:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:24:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:24:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:24:27 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v536: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:28 np0005544708 python3.9[214203]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  3 16:24:29 np0005544708 python3.9[214419]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec  3 16:24:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:24:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:24:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:24:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:24:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:24:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:24:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:24:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:24:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:24:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:24:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:24:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:24:29 np0005544708 podman[214655]: 2025-12-03 21:24:29.823828568 +0000 UTC m=+0.035202234 container create 2700e846cb0e746b5ffba3266810219a091f54642673dcfaebefe24e60bf0d05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  3 16:24:29 np0005544708 systemd[1]: Started libpod-conmon-2700e846cb0e746b5ffba3266810219a091f54642673dcfaebefe24e60bf0d05.scope.
Dec  3 16:24:29 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:24:29 np0005544708 podman[214655]: 2025-12-03 21:24:29.90072888 +0000 UTC m=+0.112102576 container init 2700e846cb0e746b5ffba3266810219a091f54642673dcfaebefe24e60bf0d05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_pascal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  3 16:24:29 np0005544708 podman[214655]: 2025-12-03 21:24:29.809932866 +0000 UTC m=+0.021306552 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:24:29 np0005544708 podman[214655]: 2025-12-03 21:24:29.910314107 +0000 UTC m=+0.121687793 container start 2700e846cb0e746b5ffba3266810219a091f54642673dcfaebefe24e60bf0d05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_pascal, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:24:29 np0005544708 podman[214655]: 2025-12-03 21:24:29.914317244 +0000 UTC m=+0.125690930 container attach 2700e846cb0e746b5ffba3266810219a091f54642673dcfaebefe24e60bf0d05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_pascal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 16:24:29 np0005544708 nifty_pascal[214671]: 167 167
Dec  3 16:24:29 np0005544708 systemd[1]: libpod-2700e846cb0e746b5ffba3266810219a091f54642673dcfaebefe24e60bf0d05.scope: Deactivated successfully.
Dec  3 16:24:29 np0005544708 conmon[214671]: conmon 2700e846cb0e746b5ffb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2700e846cb0e746b5ffba3266810219a091f54642673dcfaebefe24e60bf0d05.scope/container/memory.events
Dec  3 16:24:29 np0005544708 podman[214655]: 2025-12-03 21:24:29.91790385 +0000 UTC m=+0.129277526 container died 2700e846cb0e746b5ffba3266810219a091f54642673dcfaebefe24e60bf0d05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  3 16:24:29 np0005544708 python3.9[214649]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:24:29 np0005544708 systemd[1]: var-lib-containers-storage-overlay-9184fec759360b935444954038a2257d11c5832a3f4335a686732fdc38766f64-merged.mount: Deactivated successfully.
Dec  3 16:24:29 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v537: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:29 np0005544708 podman[214655]: 2025-12-03 21:24:29.985824721 +0000 UTC m=+0.197198387 container remove 2700e846cb0e746b5ffba3266810219a091f54642673dcfaebefe24e60bf0d05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_pascal, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  3 16:24:29 np0005544708 systemd[1]: libpod-conmon-2700e846cb0e746b5ffba3266810219a091f54642673dcfaebefe24e60bf0d05.scope: Deactivated successfully.
Dec  3 16:24:30 np0005544708 podman[214743]: 2025-12-03 21:24:30.155079878 +0000 UTC m=+0.040121087 container create ddeb1be2d42fa07bf3a64af65a0ad876c4706c98fc15478861e537af07000323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ellis, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  3 16:24:30 np0005544708 systemd[1]: Started libpod-conmon-ddeb1be2d42fa07bf3a64af65a0ad876c4706c98fc15478861e537af07000323.scope.
Dec  3 16:24:30 np0005544708 podman[214743]: 2025-12-03 21:24:30.135824432 +0000 UTC m=+0.020865731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:24:30 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:24:30 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c14ffa41a42faca11da06325e461702d72714a994021796cbe7f7047bfdf80c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:24:30 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c14ffa41a42faca11da06325e461702d72714a994021796cbe7f7047bfdf80c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:24:30 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c14ffa41a42faca11da06325e461702d72714a994021796cbe7f7047bfdf80c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:24:30 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c14ffa41a42faca11da06325e461702d72714a994021796cbe7f7047bfdf80c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:24:30 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c14ffa41a42faca11da06325e461702d72714a994021796cbe7f7047bfdf80c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:24:30 np0005544708 podman[214743]: 2025-12-03 21:24:30.25848218 +0000 UTC m=+0.143523479 container init ddeb1be2d42fa07bf3a64af65a0ad876c4706c98fc15478861e537af07000323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ellis, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:24:30 np0005544708 podman[214743]: 2025-12-03 21:24:30.273629036 +0000 UTC m=+0.158670255 container start ddeb1be2d42fa07bf3a64af65a0ad876c4706c98fc15478861e537af07000323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  3 16:24:30 np0005544708 podman[214743]: 2025-12-03 21:24:30.277348075 +0000 UTC m=+0.162389374 container attach ddeb1be2d42fa07bf3a64af65a0ad876c4706c98fc15478861e537af07000323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ellis, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:24:30 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:24:30 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:24:30 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:24:30 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:24:30 np0005544708 python3.9[214839]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764797069.5216115-172-16157622866746/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:30 np0005544708 elated_ellis[214784]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:24:30 np0005544708 elated_ellis[214784]: --> All data devices are unavailable
Dec  3 16:24:30 np0005544708 systemd[1]: libpod-ddeb1be2d42fa07bf3a64af65a0ad876c4706c98fc15478861e537af07000323.scope: Deactivated successfully.
Dec  3 16:24:30 np0005544708 podman[214743]: 2025-12-03 21:24:30.889926766 +0000 UTC m=+0.774968005 container died ddeb1be2d42fa07bf3a64af65a0ad876c4706c98fc15478861e537af07000323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ellis, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Dec  3 16:24:30 np0005544708 systemd[1]: var-lib-containers-storage-overlay-9c14ffa41a42faca11da06325e461702d72714a994021796cbe7f7047bfdf80c-merged.mount: Deactivated successfully.
Dec  3 16:24:30 np0005544708 podman[214743]: 2025-12-03 21:24:30.949502793 +0000 UTC m=+0.834544042 container remove ddeb1be2d42fa07bf3a64af65a0ad876c4706c98fc15478861e537af07000323 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_ellis, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 16:24:30 np0005544708 systemd[1]: libpod-conmon-ddeb1be2d42fa07bf3a64af65a0ad876c4706c98fc15478861e537af07000323.scope: Deactivated successfully.
Dec  3 16:24:31 np0005544708 podman[215084]: 2025-12-03 21:24:31.439060126 +0000 UTC m=+0.048416488 container create f87bfdbf7e7421a701c0824b394f1527a6b9444c8e5a2ec094b28ca1f95abfec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mahavira, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 16:24:31 np0005544708 python3.9[215069]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:31 np0005544708 systemd[1]: Started libpod-conmon-f87bfdbf7e7421a701c0824b394f1527a6b9444c8e5a2ec094b28ca1f95abfec.scope.
Dec  3 16:24:31 np0005544708 podman[215084]: 2025-12-03 21:24:31.416781069 +0000 UTC m=+0.026137461 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:24:31 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:24:31 np0005544708 podman[215084]: 2025-12-03 21:24:31.534260148 +0000 UTC m=+0.143616550 container init f87bfdbf7e7421a701c0824b394f1527a6b9444c8e5a2ec094b28ca1f95abfec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:24:31 np0005544708 podman[215084]: 2025-12-03 21:24:31.542094659 +0000 UTC m=+0.151451021 container start f87bfdbf7e7421a701c0824b394f1527a6b9444c8e5a2ec094b28ca1f95abfec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mahavira, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:24:31 np0005544708 practical_mahavira[215101]: 167 167
Dec  3 16:24:31 np0005544708 podman[215084]: 2025-12-03 21:24:31.54625984 +0000 UTC m=+0.155616202 container attach f87bfdbf7e7421a701c0824b394f1527a6b9444c8e5a2ec094b28ca1f95abfec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mahavira, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 16:24:31 np0005544708 systemd[1]: libpod-f87bfdbf7e7421a701c0824b394f1527a6b9444c8e5a2ec094b28ca1f95abfec.scope: Deactivated successfully.
Dec  3 16:24:31 np0005544708 podman[215084]: 2025-12-03 21:24:31.547877984 +0000 UTC m=+0.157234316 container died f87bfdbf7e7421a701c0824b394f1527a6b9444c8e5a2ec094b28ca1f95abfec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Dec  3 16:24:31 np0005544708 systemd[1]: var-lib-containers-storage-overlay-e25b0c7b7c67dbdfb92d04122d56fc41309920fc0c6ad183e69e090cf4ee6ec2-merged.mount: Deactivated successfully.
Dec  3 16:24:31 np0005544708 podman[215084]: 2025-12-03 21:24:31.591922024 +0000 UTC m=+0.201278356 container remove f87bfdbf7e7421a701c0824b394f1527a6b9444c8e5a2ec094b28ca1f95abfec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  3 16:24:31 np0005544708 systemd[1]: libpod-conmon-f87bfdbf7e7421a701c0824b394f1527a6b9444c8e5a2ec094b28ca1f95abfec.scope: Deactivated successfully.
Dec  3 16:24:31 np0005544708 podman[215172]: 2025-12-03 21:24:31.767464349 +0000 UTC m=+0.041944415 container create f8de85093a0c5fb583fbd89587b2cb31d158126a8e95a31448bc1ef9d24e05dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_lalande, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:24:31 np0005544708 systemd[1]: Started libpod-conmon-f8de85093a0c5fb583fbd89587b2cb31d158126a8e95a31448bc1ef9d24e05dd.scope.
Dec  3 16:24:31 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:24:31 np0005544708 podman[215172]: 2025-12-03 21:24:31.749672803 +0000 UTC m=+0.024152889 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:24:31 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ffe2c4f160ba56d1b0e3e05a45f5170843345699830f772f2d14823fee7959/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:24:31 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ffe2c4f160ba56d1b0e3e05a45f5170843345699830f772f2d14823fee7959/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:24:31 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ffe2c4f160ba56d1b0e3e05a45f5170843345699830f772f2d14823fee7959/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:24:31 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ffe2c4f160ba56d1b0e3e05a45f5170843345699830f772f2d14823fee7959/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:24:31 np0005544708 podman[215172]: 2025-12-03 21:24:31.86709311 +0000 UTC m=+0.141573266 container init f8de85093a0c5fb583fbd89587b2cb31d158126a8e95a31448bc1ef9d24e05dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:24:31 np0005544708 podman[215172]: 2025-12-03 21:24:31.88724187 +0000 UTC m=+0.161721946 container start f8de85093a0c5fb583fbd89587b2cb31d158126a8e95a31448bc1ef9d24e05dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_lalande, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:24:31 np0005544708 podman[215172]: 2025-12-03 21:24:31.892321307 +0000 UTC m=+0.166801413 container attach f8de85093a0c5fb583fbd89587b2cb31d158126a8e95a31448bc1ef9d24e05dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_lalande, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 16:24:31 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v538: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]: {
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:    "0": [
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:        {
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "devices": [
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "/dev/loop3"
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            ],
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "lv_name": "ceph_lv0",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "lv_size": "21470642176",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "name": "ceph_lv0",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "tags": {
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.cluster_name": "ceph",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.crush_device_class": "",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.encrypted": "0",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.objectstore": "bluestore",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.osd_id": "0",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.type": "block",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.vdo": "0",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.with_tpm": "0"
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            },
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "type": "block",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "vg_name": "ceph_vg0"
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:        }
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:    ],
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:    "1": [
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:        {
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "devices": [
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "/dev/loop4"
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            ],
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "lv_name": "ceph_lv1",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "lv_size": "21470642176",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "name": "ceph_lv1",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "tags": {
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.cluster_name": "ceph",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.crush_device_class": "",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.encrypted": "0",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.objectstore": "bluestore",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.osd_id": "1",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.type": "block",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.vdo": "0",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.with_tpm": "0"
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            },
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "type": "block",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "vg_name": "ceph_vg1"
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:        }
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:    ],
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:    "2": [
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:        {
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "devices": [
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "/dev/loop5"
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            ],
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "lv_name": "ceph_lv2",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "lv_size": "21470642176",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "name": "ceph_lv2",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "tags": {
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.cluster_name": "ceph",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.crush_device_class": "",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.encrypted": "0",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.objectstore": "bluestore",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.osd_id": "2",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.type": "block",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.vdo": "0",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:                "ceph.with_tpm": "0"
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            },
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "type": "block",
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:            "vg_name": "ceph_vg2"
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:        }
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]:    ]
Dec  3 16:24:32 np0005544708 xenodochial_lalande[215218]: }
Dec  3 16:24:32 np0005544708 systemd[1]: libpod-f8de85093a0c5fb583fbd89587b2cb31d158126a8e95a31448bc1ef9d24e05dd.scope: Deactivated successfully.
Dec  3 16:24:32 np0005544708 podman[215172]: 2025-12-03 21:24:32.327854081 +0000 UTC m=+0.602334207 container died f8de85093a0c5fb583fbd89587b2cb31d158126a8e95a31448bc1ef9d24e05dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_lalande, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 16:24:32 np0005544708 systemd[1]: var-lib-containers-storage-overlay-55ffe2c4f160ba56d1b0e3e05a45f5170843345699830f772f2d14823fee7959-merged.mount: Deactivated successfully.
Dec  3 16:24:32 np0005544708 podman[215172]: 2025-12-03 21:24:32.385894327 +0000 UTC m=+0.660374403 container remove f8de85093a0c5fb583fbd89587b2cb31d158126a8e95a31448bc1ef9d24e05dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_lalande, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:24:32 np0005544708 systemd[1]: libpod-conmon-f8de85093a0c5fb583fbd89587b2cb31d158126a8e95a31448bc1ef9d24e05dd.scope: Deactivated successfully.
Dec  3 16:24:32 np0005544708 python3.9[215321]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 16:24:32 np0005544708 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  3 16:24:32 np0005544708 systemd[1]: Stopped Load Kernel Modules.
Dec  3 16:24:32 np0005544708 systemd[1]: Stopping Load Kernel Modules...
Dec  3 16:24:32 np0005544708 systemd[1]: Starting Load Kernel Modules...
Dec  3 16:24:32 np0005544708 systemd[1]: Finished Load Kernel Modules.
Dec  3 16:24:32 np0005544708 podman[215381]: 2025-12-03 21:24:32.922206223 +0000 UTC m=+0.054591454 container create c190232fc00af2bc1147af138bde527fe09482963908d923b9e57469b29e93a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_franklin, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:24:32 np0005544708 systemd[1]: Started libpod-conmon-c190232fc00af2bc1147af138bde527fe09482963908d923b9e57469b29e93a9.scope.
Dec  3 16:24:32 np0005544708 podman[215381]: 2025-12-03 21:24:32.898247971 +0000 UTC m=+0.030633232 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:24:33 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:24:33 np0005544708 podman[215381]: 2025-12-03 21:24:33.025239985 +0000 UTC m=+0.157625226 container init c190232fc00af2bc1147af138bde527fe09482963908d923b9e57469b29e93a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 16:24:33 np0005544708 podman[215381]: 2025-12-03 21:24:33.039697783 +0000 UTC m=+0.172083044 container start c190232fc00af2bc1147af138bde527fe09482963908d923b9e57469b29e93a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:24:33 np0005544708 podman[215381]: 2025-12-03 21:24:33.044154752 +0000 UTC m=+0.176540023 container attach c190232fc00af2bc1147af138bde527fe09482963908d923b9e57469b29e93a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:24:33 np0005544708 sad_franklin[215407]: 167 167
Dec  3 16:24:33 np0005544708 systemd[1]: libpod-c190232fc00af2bc1147af138bde527fe09482963908d923b9e57469b29e93a9.scope: Deactivated successfully.
Dec  3 16:24:33 np0005544708 podman[215381]: 2025-12-03 21:24:33.050128463 +0000 UTC m=+0.182513724 container died c190232fc00af2bc1147af138bde527fe09482963908d923b9e57469b29e93a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec  3 16:24:33 np0005544708 systemd[1]: var-lib-containers-storage-overlay-68a0c11c64e9b02087e15a2887daa7c0520c594e175e5ba2b54182511342c0ff-merged.mount: Deactivated successfully.
Dec  3 16:24:33 np0005544708 podman[215381]: 2025-12-03 21:24:33.091956944 +0000 UTC m=+0.224342195 container remove c190232fc00af2bc1147af138bde527fe09482963908d923b9e57469b29e93a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_franklin, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:24:33 np0005544708 systemd[1]: libpod-conmon-c190232fc00af2bc1147af138bde527fe09482963908d923b9e57469b29e93a9.scope: Deactivated successfully.
Dec  3 16:24:33 np0005544708 podman[215500]: 2025-12-03 21:24:33.317535561 +0000 UTC m=+0.066732570 container create 690f1e9ed32baf7c551787536e186d06d97832dc3ea908af675e59af5eabedd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  3 16:24:33 np0005544708 systemd[1]: Started libpod-conmon-690f1e9ed32baf7c551787536e186d06d97832dc3ea908af675e59af5eabedd9.scope.
Dec  3 16:24:33 np0005544708 podman[215500]: 2025-12-03 21:24:33.291149483 +0000 UTC m=+0.040346582 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:24:33 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:24:33 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c24e5beff9d131a5da1e6812912f084dd81da2a5bc7b6348c5b20c87814489e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:24:33 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c24e5beff9d131a5da1e6812912f084dd81da2a5bc7b6348c5b20c87814489e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:24:33 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c24e5beff9d131a5da1e6812912f084dd81da2a5bc7b6348c5b20c87814489e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:24:33 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c24e5beff9d131a5da1e6812912f084dd81da2a5bc7b6348c5b20c87814489e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:24:33 np0005544708 podman[215500]: 2025-12-03 21:24:33.428748982 +0000 UTC m=+0.177946101 container init 690f1e9ed32baf7c551787536e186d06d97832dc3ea908af675e59af5eabedd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  3 16:24:33 np0005544708 podman[215500]: 2025-12-03 21:24:33.448145902 +0000 UTC m=+0.197342951 container start 690f1e9ed32baf7c551787536e186d06d97832dc3ea908af675e59af5eabedd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  3 16:24:33 np0005544708 podman[215500]: 2025-12-03 21:24:33.452737065 +0000 UTC m=+0.201934104 container attach 690f1e9ed32baf7c551787536e186d06d97832dc3ea908af675e59af5eabedd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_williams, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec  3 16:24:33 np0005544708 python3.9[215595]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:24:33 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v539: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:34 np0005544708 lvm[215722]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:24:34 np0005544708 lvm[215722]: VG ceph_vg0 finished
Dec  3 16:24:34 np0005544708 lvm[215725]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:24:34 np0005544708 lvm[215725]: VG ceph_vg1 finished
Dec  3 16:24:34 np0005544708 lvm[215731]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:24:34 np0005544708 lvm[215731]: VG ceph_vg2 finished
Dec  3 16:24:34 np0005544708 lvm[215747]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:24:34 np0005544708 lvm[215747]: VG ceph_vg0 finished
Dec  3 16:24:34 np0005544708 lvm[215751]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:24:34 np0005544708 lvm[215751]: VG ceph_vg2 finished
Dec  3 16:24:34 np0005544708 elegant_williams[215561]: {}
Dec  3 16:24:34 np0005544708 lvm[215760]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:24:34 np0005544708 lvm[215760]: VG ceph_vg2 finished
Dec  3 16:24:34 np0005544708 systemd[1]: libpod-690f1e9ed32baf7c551787536e186d06d97832dc3ea908af675e59af5eabedd9.scope: Deactivated successfully.
Dec  3 16:24:34 np0005544708 systemd[1]: libpod-690f1e9ed32baf7c551787536e186d06d97832dc3ea908af675e59af5eabedd9.scope: Consumed 1.397s CPU time.
Dec  3 16:24:34 np0005544708 podman[215500]: 2025-12-03 21:24:34.302323669 +0000 UTC m=+1.051520688 container died 690f1e9ed32baf7c551787536e186d06d97832dc3ea908af675e59af5eabedd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_williams, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 16:24:34 np0005544708 systemd[1]: var-lib-containers-storage-overlay-c24e5beff9d131a5da1e6812912f084dd81da2a5bc7b6348c5b20c87814489e8-merged.mount: Deactivated successfully.
Dec  3 16:24:34 np0005544708 podman[215500]: 2025-12-03 21:24:34.355544836 +0000 UTC m=+1.104741845 container remove 690f1e9ed32baf7c551787536e186d06d97832dc3ea908af675e59af5eabedd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_williams, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec  3 16:24:34 np0005544708 systemd[1]: libpod-conmon-690f1e9ed32baf7c551787536e186d06d97832dc3ea908af675e59af5eabedd9.scope: Deactivated successfully.
Dec  3 16:24:34 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:24:34 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:24:34 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:24:34 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:24:34 np0005544708 python3.9[215866]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:24:35 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:24:35 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:24:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:24:35 np0005544708 python3.9[216019]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:24:35 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v540: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:36 np0005544708 python3.9[216171]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:24:36 np0005544708 python3.9[216294]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764797075.7428403-230-184252749930175/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:37 np0005544708 python3.9[216446]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:24:37 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v541: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:38 np0005544708 python3.9[216599]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:39 np0005544708 python3.9[216751]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:39 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v542: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:24:40 np0005544708 python3.9[216903]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:41 np0005544708 python3.9[217055]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:41 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v543: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:42 np0005544708 python3.9[217207]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:42 np0005544708 podman[217331]: 2025-12-03 21:24:42.933241219 +0000 UTC m=+0.159348162 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 16:24:43 np0005544708 python3.9[217373]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:43 np0005544708 python3.9[217535]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:43 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v544: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:44 np0005544708 python3.9[217687]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:24:45 np0005544708 python3.9[217841]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:45 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:24:45 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v545: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:46 np0005544708 python3.9[217993]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:24:46 np0005544708 python3.9[218145]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:24:47 np0005544708 python3.9[218223]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:24:47 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v546: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:48 np0005544708 python3.9[218375]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:24:48 np0005544708 python3.9[218453]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:24:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:24:48.927 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:24:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:24:48.927 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:24:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:24:48.928 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:24:49 np0005544708 python3.9[218605]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:49 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v547: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:50 np0005544708 python3.9[218757]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:24:50 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:24:50 np0005544708 python3.9[218835]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:51 np0005544708 python3.9[218987]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:24:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:24:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:24:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:24:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:24:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:24:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:24:51 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v548: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:52 np0005544708 python3.9[219065]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:52 np0005544708 python3.9[219217]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:24:53 np0005544708 systemd[1]: Reloading.
Dec  3 16:24:53 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:24:53 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:24:53 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v549: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:54 np0005544708 python3.9[219406]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:24:54 np0005544708 podman[219484]: 2025-12-03 21:24:54.579267802 +0000 UTC m=+0.063071971 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  3 16:24:54 np0005544708 python3.9[219485]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:55 np0005544708 python3.9[219655]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:24:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:24:55 np0005544708 python3.9[219733]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:24:55 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v550: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:56 np0005544708 python3.9[219885]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:24:56 np0005544708 systemd[1]: Reloading.
Dec  3 16:24:56 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:24:56 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:24:57 np0005544708 systemd[1]: Starting Create netns directory...
Dec  3 16:24:57 np0005544708 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  3 16:24:57 np0005544708 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  3 16:24:57 np0005544708 systemd[1]: Finished Create netns directory.
Dec  3 16:24:57 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v551: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:24:58 np0005544708 python3.9[220078]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:24:58 np0005544708 python3.9[220230]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:24:59 np0005544708 python3.9[220353]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764797098.2170405-437-85516893330854/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:24:59 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v552: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:00 np0005544708 python3.9[220505]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:25:00 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:25:01 np0005544708 python3.9[220657]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:25:01 np0005544708 python3.9[220780]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764797100.623288-462-128579134738981/.source.json _original_basename=.n3arkdfy follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:01 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v553: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:02 np0005544708 python3.9[220932]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:03 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v554: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:04 np0005544708 python3.9[221359]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec  3 16:25:05 np0005544708 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec  3 16:25:05 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:25:05 np0005544708 python3.9[221512]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 16:25:05 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v555: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:06 np0005544708 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  3 16:25:06 np0005544708 python3.9[221665]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  3 16:25:07 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v556: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:08 np0005544708 python3[221844]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 16:25:09 np0005544708 podman[221855]: 2025-12-03 21:25:09.695447327 +0000 UTC m=+1.069111689 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  3 16:25:09 np0005544708 podman[221914]: 2025-12-03 21:25:09.860747348 +0000 UTC m=+0.070777018 container create 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec  3 16:25:09 np0005544708 podman[221914]: 2025-12-03 21:25:09.825092022 +0000 UTC m=+0.035121742 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  3 16:25:09 np0005544708 python3[221844]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  3 16:25:09 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v557: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:25:10 np0005544708 python3.9[222102]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:25:11 np0005544708 python3.9[222256]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:11 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v558: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:12 np0005544708 python3.9[222332]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:25:13 np0005544708 python3.9[222483]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764797112.4214425-550-92593012327089/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:13 np0005544708 podman[222484]: 2025-12-03 21:25:13.235392629 +0000 UTC m=+0.164311665 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  3 16:25:13 np0005544708 python3.9[222585]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 16:25:13 np0005544708 systemd[1]: Reloading.
Dec  3 16:25:13 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:25:13 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:25:13 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v559: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:14 np0005544708 python3.9[222695]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:25:14 np0005544708 systemd[1]: Reloading.
Dec  3 16:25:15 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:25:15 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:25:15 np0005544708 systemd[1]: Starting multipathd container...
Dec  3 16:25:15 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:25:15 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9490031f965e3c8c8ec7bffa43f620db69dfaa1dd92d685b2e794e3ebc30139/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  3 16:25:15 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9490031f965e3c8c8ec7bffa43f620db69dfaa1dd92d685b2e794e3ebc30139/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  3 16:25:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:25:15 np0005544708 systemd[1]: Started /usr/bin/podman healthcheck run 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c.
Dec  3 16:25:15 np0005544708 podman[222735]: 2025-12-03 21:25:15.618697337 +0000 UTC m=+0.282709220 container init 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Dec  3 16:25:15 np0005544708 multipathd[222750]: + sudo -E kolla_set_configs
Dec  3 16:25:15 np0005544708 podman[222735]: 2025-12-03 21:25:15.669731594 +0000 UTC m=+0.333743477 container start 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec  3 16:25:15 np0005544708 podman[222735]: multipathd
Dec  3 16:25:15 np0005544708 systemd[1]: Started multipathd container.
Dec  3 16:25:15 np0005544708 multipathd[222750]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 16:25:15 np0005544708 multipathd[222750]: INFO:__main__:Validating config file
Dec  3 16:25:15 np0005544708 multipathd[222750]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 16:25:15 np0005544708 multipathd[222750]: INFO:__main__:Writing out command to execute
Dec  3 16:25:15 np0005544708 multipathd[222750]: ++ cat /run_command
Dec  3 16:25:15 np0005544708 multipathd[222750]: + CMD='/usr/sbin/multipathd -d'
Dec  3 16:25:15 np0005544708 multipathd[222750]: + ARGS=
Dec  3 16:25:15 np0005544708 multipathd[222750]: + sudo kolla_copy_cacerts
Dec  3 16:25:15 np0005544708 podman[222757]: 2025-12-03 21:25:15.789030812 +0000 UTC m=+0.102229311 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  3 16:25:15 np0005544708 multipathd[222750]: + [[ ! -n '' ]]
Dec  3 16:25:15 np0005544708 multipathd[222750]: + . kolla_extend_start
Dec  3 16:25:15 np0005544708 multipathd[222750]: Running command: '/usr/sbin/multipathd -d'
Dec  3 16:25:15 np0005544708 multipathd[222750]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  3 16:25:15 np0005544708 multipathd[222750]: + umask 0022
Dec  3 16:25:15 np0005544708 multipathd[222750]: + exec /usr/sbin/multipathd -d
Dec  3 16:25:15 np0005544708 systemd[1]: 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c-2060e1ae219baa4e.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 16:25:15 np0005544708 systemd[1]: 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c-2060e1ae219baa4e.service: Failed with result 'exit-code'.
Dec  3 16:25:15 np0005544708 multipathd[222750]: 3055.437631 | --------start up--------
Dec  3 16:25:15 np0005544708 multipathd[222750]: 3055.437649 | read /etc/multipath.conf
Dec  3 16:25:15 np0005544708 multipathd[222750]: 3055.446042 | path checkers start up
Dec  3 16:25:15 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v560: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:16 np0005544708 python3.9[222938]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:25:17 np0005544708 python3.9[223092]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:25:17 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v561: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:18 np0005544708 systemd[1]: virtqemud.service: Deactivated successfully.
Dec  3 16:25:18 np0005544708 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  3 16:25:18 np0005544708 python3.9[223257]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 16:25:18 np0005544708 systemd[1]: Stopping multipathd container...
Dec  3 16:25:18 np0005544708 multipathd[222750]: 3057.990949 | exit (signal)
Dec  3 16:25:18 np0005544708 multipathd[222750]: 3057.991058 | --------shut down-------
Dec  3 16:25:18 np0005544708 systemd[1]: libpod-2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c.scope: Deactivated successfully.
Dec  3 16:25:18 np0005544708 podman[223263]: 2025-12-03 21:25:18.391317479 +0000 UTC m=+0.077442937 container died 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 16:25:18 np0005544708 systemd[1]: 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c-2060e1ae219baa4e.timer: Deactivated successfully.
Dec  3 16:25:18 np0005544708 systemd[1]: Stopped /usr/bin/podman healthcheck run 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c.
Dec  3 16:25:18 np0005544708 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c-userdata-shm.mount: Deactivated successfully.
Dec  3 16:25:18 np0005544708 systemd[1]: var-lib-containers-storage-overlay-e9490031f965e3c8c8ec7bffa43f620db69dfaa1dd92d685b2e794e3ebc30139-merged.mount: Deactivated successfully.
Dec  3 16:25:18 np0005544708 podman[223263]: 2025-12-03 21:25:18.650152458 +0000 UTC m=+0.336277876 container cleanup 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:25:18 np0005544708 podman[223263]: multipathd
Dec  3 16:25:18 np0005544708 podman[223293]: multipathd
Dec  3 16:25:18 np0005544708 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec  3 16:25:18 np0005544708 systemd[1]: Stopped multipathd container.
Dec  3 16:25:18 np0005544708 systemd[1]: Starting multipathd container...
Dec  3 16:25:18 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:25:18 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9490031f965e3c8c8ec7bffa43f620db69dfaa1dd92d685b2e794e3ebc30139/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  3 16:25:18 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9490031f965e3c8c8ec7bffa43f620db69dfaa1dd92d685b2e794e3ebc30139/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  3 16:25:18 np0005544708 systemd[1]: Started /usr/bin/podman healthcheck run 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c.
Dec  3 16:25:18 np0005544708 podman[223306]: 2025-12-03 21:25:18.876190836 +0000 UTC m=+0.133292933 container init 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  3 16:25:18 np0005544708 multipathd[223321]: + sudo -E kolla_set_configs
Dec  3 16:25:18 np0005544708 podman[223306]: 2025-12-03 21:25:18.91324618 +0000 UTC m=+0.170348277 container start 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  3 16:25:18 np0005544708 podman[223306]: multipathd
Dec  3 16:25:18 np0005544708 systemd[1]: Started multipathd container.
Dec  3 16:25:18 np0005544708 multipathd[223321]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 16:25:18 np0005544708 multipathd[223321]: INFO:__main__:Validating config file
Dec  3 16:25:18 np0005544708 multipathd[223321]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 16:25:18 np0005544708 multipathd[223321]: INFO:__main__:Writing out command to execute
Dec  3 16:25:19 np0005544708 podman[223328]: 2025-12-03 21:25:19.007447925 +0000 UTC m=+0.082357528 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true)
Dec  3 16:25:19 np0005544708 systemd[1]: 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c-31dea9158dd74dbc.service: Main process exited, code=exited, status=1/FAILURE
Dec  3 16:25:19 np0005544708 systemd[1]: 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c-31dea9158dd74dbc.service: Failed with result 'exit-code'.
Dec  3 16:25:19 np0005544708 multipathd[223321]: ++ cat /run_command
Dec  3 16:25:19 np0005544708 multipathd[223321]: + CMD='/usr/sbin/multipathd -d'
Dec  3 16:25:19 np0005544708 multipathd[223321]: + ARGS=
Dec  3 16:25:19 np0005544708 multipathd[223321]: + sudo kolla_copy_cacerts
Dec  3 16:25:19 np0005544708 multipathd[223321]: + [[ ! -n '' ]]
Dec  3 16:25:19 np0005544708 multipathd[223321]: + . kolla_extend_start
Dec  3 16:25:19 np0005544708 multipathd[223321]: Running command: '/usr/sbin/multipathd -d'
Dec  3 16:25:19 np0005544708 multipathd[223321]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  3 16:25:19 np0005544708 multipathd[223321]: + umask 0022
Dec  3 16:25:19 np0005544708 multipathd[223321]: + exec /usr/sbin/multipathd -d
Dec  3 16:25:19 np0005544708 multipathd[223321]: 3058.691925 | --------start up--------
Dec  3 16:25:19 np0005544708 multipathd[223321]: 3058.691953 | read /etc/multipath.conf
Dec  3 16:25:19 np0005544708 multipathd[223321]: 3058.698324 | path checkers start up
Dec  3 16:25:19 np0005544708 python3.9[223510]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:19 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v562: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:20 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:25:20 np0005544708 python3.9[223662]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:25:21
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['.mgr', 'vms', 'images', 'backups', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:25:21 np0005544708 python3.9[223814]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:25:21 np0005544708 kernel: Key type psk registered
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:25:21 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v563: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:22 np0005544708 python3.9[223976]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:25:23 np0005544708 python3.9[224099]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764797122.0979817-630-12011584822208/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:23 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v564: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:24 np0005544708 python3.9[224251]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:24 np0005544708 podman[224375]: 2025-12-03 21:25:24.902701204 +0000 UTC m=+0.087874227 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  3 16:25:25 np0005544708 python3.9[224418]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 16:25:25 np0005544708 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  3 16:25:25 np0005544708 systemd[1]: Stopped Load Kernel Modules.
Dec  3 16:25:25 np0005544708 systemd[1]: Stopping Load Kernel Modules...
Dec  3 16:25:25 np0005544708 systemd[1]: Starting Load Kernel Modules...
Dec  3 16:25:25 np0005544708 systemd[1]: Finished Load Kernel Modules.
Dec  3 16:25:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:25:25 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v565: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:26 np0005544708 python3.9[224578]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  3 16:25:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:25:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:25:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:25:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:25:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:25:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:25:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:25:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:25:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:25:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:25:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:25:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:25:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:25:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:25:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:25:27 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v566: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:28 np0005544708 systemd[1]: Reloading.
Dec  3 16:25:28 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:25:28 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:25:28 np0005544708 systemd[1]: Reloading.
Dec  3 16:25:28 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:25:28 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:25:29 np0005544708 systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  3 16:25:29 np0005544708 systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  3 16:25:29 np0005544708 lvm[224693]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:25:29 np0005544708 lvm[224691]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:25:29 np0005544708 lvm[224691]: VG ceph_vg1 finished
Dec  3 16:25:29 np0005544708 lvm[224693]: VG ceph_vg2 finished
Dec  3 16:25:29 np0005544708 lvm[224692]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:25:29 np0005544708 lvm[224692]: VG ceph_vg0 finished
Dec  3 16:25:29 np0005544708 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  3 16:25:29 np0005544708 systemd[1]: Starting man-db-cache-update.service...
Dec  3 16:25:29 np0005544708 systemd[1]: Reloading.
Dec  3 16:25:29 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:25:29 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:25:29 np0005544708 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  3 16:25:29 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v567: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:30 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:25:31 np0005544708 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  3 16:25:31 np0005544708 systemd[1]: Finished man-db-cache-update.service.
Dec  3 16:25:31 np0005544708 systemd[1]: man-db-cache-update.service: Consumed 1.912s CPU time.
Dec  3 16:25:31 np0005544708 python3.9[225940]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 16:25:31 np0005544708 systemd[1]: run-rba4d539b72704003985510d4409248f5.service: Deactivated successfully.
Dec  3 16:25:31 np0005544708 systemd[1]: Stopping Open-iSCSI...
Dec  3 16:25:31 np0005544708 iscsid[213732]: iscsid shutting down.
Dec  3 16:25:31 np0005544708 systemd[1]: iscsid.service: Deactivated successfully.
Dec  3 16:25:31 np0005544708 systemd[1]: Stopped Open-iSCSI.
Dec  3 16:25:31 np0005544708 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  3 16:25:31 np0005544708 systemd[1]: Starting Open-iSCSI...
Dec  3 16:25:31 np0005544708 systemd[1]: Started Open-iSCSI.
Dec  3 16:25:31 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v568: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:32 np0005544708 python3.9[226191]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  3 16:25:33 np0005544708 python3.9[226347]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:33 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v569: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:34 np0005544708 python3.9[226499]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 16:25:34 np0005544708 systemd[1]: Reloading.
Dec  3 16:25:34 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:25:34 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:25:35 np0005544708 python3.9[226757]: ansible-ansible.builtin.service_facts Invoked
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:25:35.503802) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797135503820, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1007, "num_deletes": 251, "total_data_size": 1003566, "memory_usage": 1022512, "flush_reason": "Manual Compaction"}
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797135509548, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 607958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11096, "largest_seqno": 12102, "table_properties": {"data_size": 604076, "index_size": 1534, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9761, "raw_average_key_size": 19, "raw_value_size": 595817, "raw_average_value_size": 1215, "num_data_blocks": 70, "num_entries": 490, "num_filter_entries": 490, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764797034, "oldest_key_time": 1764797034, "file_creation_time": 1764797135, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 5795 microseconds, and 2138 cpu microseconds.
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:25:35.509596) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 607958 bytes OK
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:25:35.509608) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:25:35.510598) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:25:35.510611) EVENT_LOG_v1 {"time_micros": 1764797135510608, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:25:35.510624) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 998834, prev total WAL file size 998834, number of live WAL files 2.
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:25:35.510996) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(593KB)], [29(5876KB)]
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797135511054, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 6625368, "oldest_snapshot_seqno": -1}
Dec  3 16:25:35 np0005544708 network[226831]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  3 16:25:35 np0005544708 network[226832]: 'network-scripts' will be removed from distribution in near future.
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3229 keys, 4889473 bytes, temperature: kUnknown
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797135545854, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 4889473, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4866650, "index_size": 13626, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 74733, "raw_average_key_size": 23, "raw_value_size": 4807579, "raw_average_value_size": 1488, "num_data_blocks": 606, "num_entries": 3229, "num_filter_entries": 3229, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796079, "oldest_key_time": 0, "file_creation_time": 1764797135, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:25:35 np0005544708 network[226833]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:25:35.546207) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 4889473 bytes
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:25:35.547490) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.3 rd, 139.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 5.7 +0.0 blob) out(4.7 +0.0 blob), read-write-amplify(18.9) write-amplify(8.0) OK, records in: 3698, records dropped: 469 output_compression: NoCompression
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:25:35.547509) EVENT_LOG_v1 {"time_micros": 1764797135547500, "job": 12, "event": "compaction_finished", "compaction_time_micros": 35007, "compaction_time_cpu_micros": 11475, "output_level": 6, "num_output_files": 1, "total_output_size": 4889473, "num_input_records": 3698, "num_output_records": 3229, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797135547768, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797135548878, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:25:35.510929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:25:35.548928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:25:35.548932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:25:35.548934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:25:35.548935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:25:35.548937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:25:35 np0005544708 podman[226851]: 2025-12-03 21:25:35.783548205 +0000 UTC m=+0.038132184 container create 17c7191304344d147fb67538dbb3d3e1603de65355d2470936f4019275967ce2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 16:25:35 np0005544708 podman[226851]: 2025-12-03 21:25:35.767173746 +0000 UTC m=+0.021757745 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:25:35 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:25:35 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v570: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:36 np0005544708 systemd[1]: Started libpod-conmon-17c7191304344d147fb67538dbb3d3e1603de65355d2470936f4019275967ce2.scope.
Dec  3 16:25:36 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:25:36 np0005544708 podman[226851]: 2025-12-03 21:25:36.521153617 +0000 UTC m=+0.775737646 container init 17c7191304344d147fb67538dbb3d3e1603de65355d2470936f4019275967ce2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lamport, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:25:36 np0005544708 podman[226851]: 2025-12-03 21:25:36.528967547 +0000 UTC m=+0.783551566 container start 17c7191304344d147fb67538dbb3d3e1603de65355d2470936f4019275967ce2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:25:36 np0005544708 podman[226851]: 2025-12-03 21:25:36.533370864 +0000 UTC m=+0.787954883 container attach 17c7191304344d147fb67538dbb3d3e1603de65355d2470936f4019275967ce2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec  3 16:25:36 np0005544708 dazzling_lamport[226868]: 167 167
Dec  3 16:25:36 np0005544708 systemd[1]: libpod-17c7191304344d147fb67538dbb3d3e1603de65355d2470936f4019275967ce2.scope: Deactivated successfully.
Dec  3 16:25:36 np0005544708 podman[226851]: 2025-12-03 21:25:36.538885992 +0000 UTC m=+0.793470011 container died 17c7191304344d147fb67538dbb3d3e1603de65355d2470936f4019275967ce2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  3 16:25:36 np0005544708 systemd[1]: var-lib-containers-storage-overlay-bb4f059f102988e17dc9cafdcbf454545ddc1eb398a5695e48549f2e5dd91efe-merged.mount: Deactivated successfully.
Dec  3 16:25:36 np0005544708 podman[226851]: 2025-12-03 21:25:36.585753219 +0000 UTC m=+0.840337218 container remove 17c7191304344d147fb67538dbb3d3e1603de65355d2470936f4019275967ce2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 16:25:36 np0005544708 systemd[1]: libpod-conmon-17c7191304344d147fb67538dbb3d3e1603de65355d2470936f4019275967ce2.scope: Deactivated successfully.
Dec  3 16:25:36 np0005544708 podman[226902]: 2025-12-03 21:25:36.817513781 +0000 UTC m=+0.052524269 container create 03a4f458781716710141b045e7777268b94c396267a5133e262d796ed7345cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jackson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  3 16:25:36 np0005544708 systemd[1]: Started libpod-conmon-03a4f458781716710141b045e7777268b94c396267a5133e262d796ed7345cb9.scope.
Dec  3 16:25:36 np0005544708 podman[226902]: 2025-12-03 21:25:36.794769622 +0000 UTC m=+0.029780110 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:25:36 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:25:36 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3fae31c25abede7f33f0ff9972968f33fe2cac4ef23cfc1479d7ceb982cc2dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:25:36 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3fae31c25abede7f33f0ff9972968f33fe2cac4ef23cfc1479d7ceb982cc2dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:25:36 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3fae31c25abede7f33f0ff9972968f33fe2cac4ef23cfc1479d7ceb982cc2dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:25:36 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3fae31c25abede7f33f0ff9972968f33fe2cac4ef23cfc1479d7ceb982cc2dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:25:36 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3fae31c25abede7f33f0ff9972968f33fe2cac4ef23cfc1479d7ceb982cc2dd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:25:36 np0005544708 podman[226902]: 2025-12-03 21:25:36.941373901 +0000 UTC m=+0.176384449 container init 03a4f458781716710141b045e7777268b94c396267a5133e262d796ed7345cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  3 16:25:36 np0005544708 podman[226902]: 2025-12-03 21:25:36.959417445 +0000 UTC m=+0.194427913 container start 03a4f458781716710141b045e7777268b94c396267a5133e262d796ed7345cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jackson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:25:36 np0005544708 podman[226902]: 2025-12-03 21:25:36.963816023 +0000 UTC m=+0.198826481 container attach 03a4f458781716710141b045e7777268b94c396267a5133e262d796ed7345cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jackson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:25:37 np0005544708 zealous_jackson[226922]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:25:37 np0005544708 zealous_jackson[226922]: --> All data devices are unavailable
Dec  3 16:25:37 np0005544708 systemd[1]: libpod-03a4f458781716710141b045e7777268b94c396267a5133e262d796ed7345cb9.scope: Deactivated successfully.
Dec  3 16:25:37 np0005544708 podman[226902]: 2025-12-03 21:25:37.590439631 +0000 UTC m=+0.825450119 container died 03a4f458781716710141b045e7777268b94c396267a5133e262d796ed7345cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:25:37 np0005544708 systemd[1]: var-lib-containers-storage-overlay-a3fae31c25abede7f33f0ff9972968f33fe2cac4ef23cfc1479d7ceb982cc2dd-merged.mount: Deactivated successfully.
Dec  3 16:25:37 np0005544708 podman[226902]: 2025-12-03 21:25:37.644068028 +0000 UTC m=+0.879078486 container remove 03a4f458781716710141b045e7777268b94c396267a5133e262d796ed7345cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jackson, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 16:25:37 np0005544708 systemd[1]: libpod-conmon-03a4f458781716710141b045e7777268b94c396267a5133e262d796ed7345cb9.scope: Deactivated successfully.
Dec  3 16:25:38 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v571: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:38 np0005544708 podman[227061]: 2025-12-03 21:25:38.161846538 +0000 UTC m=+0.026098882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:25:38 np0005544708 podman[227061]: 2025-12-03 21:25:38.375287578 +0000 UTC m=+0.239539932 container create c7aa61dbc70a324a770080fda4e73b1c6d478c1c8e978397b3761e62b6af4a06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:25:38 np0005544708 systemd[1]: Started libpod-conmon-c7aa61dbc70a324a770080fda4e73b1c6d478c1c8e978397b3761e62b6af4a06.scope.
Dec  3 16:25:38 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:25:38 np0005544708 podman[227061]: 2025-12-03 21:25:38.478140676 +0000 UTC m=+0.342393020 container init c7aa61dbc70a324a770080fda4e73b1c6d478c1c8e978397b3761e62b6af4a06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_nightingale, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  3 16:25:38 np0005544708 podman[227061]: 2025-12-03 21:25:38.488666608 +0000 UTC m=+0.352918972 container start c7aa61dbc70a324a770080fda4e73b1c6d478c1c8e978397b3761e62b6af4a06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_nightingale, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:25:38 np0005544708 podman[227061]: 2025-12-03 21:25:38.493334033 +0000 UTC m=+0.357586377 container attach c7aa61dbc70a324a770080fda4e73b1c6d478c1c8e978397b3761e62b6af4a06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_nightingale, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 16:25:38 np0005544708 competent_nightingale[227078]: 167 167
Dec  3 16:25:38 np0005544708 systemd[1]: libpod-c7aa61dbc70a324a770080fda4e73b1c6d478c1c8e978397b3761e62b6af4a06.scope: Deactivated successfully.
Dec  3 16:25:38 np0005544708 podman[227061]: 2025-12-03 21:25:38.495263865 +0000 UTC m=+0.359516189 container died c7aa61dbc70a324a770080fda4e73b1c6d478c1c8e978397b3761e62b6af4a06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_nightingale, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 16:25:38 np0005544708 systemd[1]: var-lib-containers-storage-overlay-7229ca75e3694cf335413c3542b79580ea79994f98c7a4406ebea057a6484a95-merged.mount: Deactivated successfully.
Dec  3 16:25:38 np0005544708 podman[227061]: 2025-12-03 21:25:38.531763934 +0000 UTC m=+0.396016258 container remove c7aa61dbc70a324a770080fda4e73b1c6d478c1c8e978397b3761e62b6af4a06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 16:25:38 np0005544708 systemd[1]: libpod-conmon-c7aa61dbc70a324a770080fda4e73b1c6d478c1c8e978397b3761e62b6af4a06.scope: Deactivated successfully.
Dec  3 16:25:38 np0005544708 podman[227102]: 2025-12-03 21:25:38.775830526 +0000 UTC m=+0.071377595 container create f00adc8f13a206823081b50863a61085dca880cfa828645163398fd6b8fb0161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mendel, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:25:38 np0005544708 systemd[1]: Started libpod-conmon-f00adc8f13a206823081b50863a61085dca880cfa828645163398fd6b8fb0161.scope.
Dec  3 16:25:38 np0005544708 podman[227102]: 2025-12-03 21:25:38.745691018 +0000 UTC m=+0.041238147 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:25:38 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:25:38 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c8c8d5c9d9bfffaf111fc96a4d44cc8766e6bfb1e71fe90751bff70b112b814/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:25:38 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c8c8d5c9d9bfffaf111fc96a4d44cc8766e6bfb1e71fe90751bff70b112b814/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:25:38 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c8c8d5c9d9bfffaf111fc96a4d44cc8766e6bfb1e71fe90751bff70b112b814/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:25:38 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c8c8d5c9d9bfffaf111fc96a4d44cc8766e6bfb1e71fe90751bff70b112b814/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:25:38 np0005544708 podman[227102]: 2025-12-03 21:25:38.869712792 +0000 UTC m=+0.165259831 container init f00adc8f13a206823081b50863a61085dca880cfa828645163398fd6b8fb0161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mendel, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  3 16:25:38 np0005544708 podman[227102]: 2025-12-03 21:25:38.879525976 +0000 UTC m=+0.175073015 container start f00adc8f13a206823081b50863a61085dca880cfa828645163398fd6b8fb0161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  3 16:25:38 np0005544708 podman[227102]: 2025-12-03 21:25:38.882889785 +0000 UTC m=+0.178436824 container attach f00adc8f13a206823081b50863a61085dca880cfa828645163398fd6b8fb0161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]: {
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:    "0": [
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:        {
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "devices": [
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "/dev/loop3"
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            ],
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "lv_name": "ceph_lv0",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "lv_size": "21470642176",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "name": "ceph_lv0",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "tags": {
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.cluster_name": "ceph",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.crush_device_class": "",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.encrypted": "0",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.objectstore": "bluestore",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.osd_id": "0",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.type": "block",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.vdo": "0",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.with_tpm": "0"
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            },
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "type": "block",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "vg_name": "ceph_vg0"
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:        }
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:    ],
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:    "1": [
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:        {
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "devices": [
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "/dev/loop4"
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            ],
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "lv_name": "ceph_lv1",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "lv_size": "21470642176",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "name": "ceph_lv1",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "tags": {
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.cluster_name": "ceph",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.crush_device_class": "",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.encrypted": "0",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.objectstore": "bluestore",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.osd_id": "1",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.type": "block",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.vdo": "0",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.with_tpm": "0"
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            },
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "type": "block",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "vg_name": "ceph_vg1"
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:        }
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:    ],
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:    "2": [
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:        {
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "devices": [
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "/dev/loop5"
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            ],
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "lv_name": "ceph_lv2",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "lv_size": "21470642176",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "name": "ceph_lv2",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "tags": {
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.cluster_name": "ceph",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.crush_device_class": "",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.encrypted": "0",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.objectstore": "bluestore",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.osd_id": "2",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.type": "block",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.vdo": "0",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:                "ceph.with_tpm": "0"
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            },
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "type": "block",
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:            "vg_name": "ceph_vg2"
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:        }
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]:    ]
Dec  3 16:25:39 np0005544708 interesting_mendel[227118]: }
Dec  3 16:25:39 np0005544708 podman[227102]: 2025-12-03 21:25:39.183666868 +0000 UTC m=+0.479213967 container died f00adc8f13a206823081b50863a61085dca880cfa828645163398fd6b8fb0161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:25:39 np0005544708 systemd[1]: libpod-f00adc8f13a206823081b50863a61085dca880cfa828645163398fd6b8fb0161.scope: Deactivated successfully.
Dec  3 16:25:39 np0005544708 systemd[1]: var-lib-containers-storage-overlay-3c8c8d5c9d9bfffaf111fc96a4d44cc8766e6bfb1e71fe90751bff70b112b814-merged.mount: Deactivated successfully.
Dec  3 16:25:39 np0005544708 podman[227102]: 2025-12-03 21:25:39.309637034 +0000 UTC m=+0.605184113 container remove f00adc8f13a206823081b50863a61085dca880cfa828645163398fd6b8fb0161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mendel, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:25:39 np0005544708 systemd[1]: libpod-conmon-f00adc8f13a206823081b50863a61085dca880cfa828645163398fd6b8fb0161.scope: Deactivated successfully.
Dec  3 16:25:39 np0005544708 podman[227228]: 2025-12-03 21:25:39.853997787 +0000 UTC m=+0.063150764 container create 5be326ca7e262818c001620fb12e80572a9e602169197d5976c2358b9e828fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  3 16:25:39 np0005544708 systemd[1]: Started libpod-conmon-5be326ca7e262818c001620fb12e80572a9e602169197d5976c2358b9e828fc1.scope.
Dec  3 16:25:39 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:25:39 np0005544708 podman[227228]: 2025-12-03 21:25:39.828222837 +0000 UTC m=+0.037375844 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:25:39 np0005544708 podman[227228]: 2025-12-03 21:25:39.935955884 +0000 UTC m=+0.145108891 container init 5be326ca7e262818c001620fb12e80572a9e602169197d5976c2358b9e828fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:25:39 np0005544708 podman[227228]: 2025-12-03 21:25:39.942998493 +0000 UTC m=+0.152151460 container start 5be326ca7e262818c001620fb12e80572a9e602169197d5976c2358b9e828fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  3 16:25:39 np0005544708 podman[227228]: 2025-12-03 21:25:39.946388354 +0000 UTC m=+0.155541321 container attach 5be326ca7e262818c001620fb12e80572a9e602169197d5976c2358b9e828fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 16:25:39 np0005544708 awesome_dirac[227251]: 167 167
Dec  3 16:25:39 np0005544708 systemd[1]: libpod-5be326ca7e262818c001620fb12e80572a9e602169197d5976c2358b9e828fc1.scope: Deactivated successfully.
Dec  3 16:25:39 np0005544708 podman[227228]: 2025-12-03 21:25:39.950699419 +0000 UTC m=+0.159852446 container died 5be326ca7e262818c001620fb12e80572a9e602169197d5976c2358b9e828fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Dec  3 16:25:39 np0005544708 systemd[1]: var-lib-containers-storage-overlay-74bb4123efd354ab8c43b330715d654bdcdbce51d270add67dfe31afbb4776d5-merged.mount: Deactivated successfully.
Dec  3 16:25:39 np0005544708 podman[227228]: 2025-12-03 21:25:39.993651971 +0000 UTC m=+0.202804908 container remove 5be326ca7e262818c001620fb12e80572a9e602169197d5976c2358b9e828fc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 16:25:40 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v572: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:40 np0005544708 systemd[1]: libpod-conmon-5be326ca7e262818c001620fb12e80572a9e602169197d5976c2358b9e828fc1.scope: Deactivated successfully.
Dec  3 16:25:40 np0005544708 podman[227285]: 2025-12-03 21:25:40.243872508 +0000 UTC m=+0.068838996 container create 3c86059e20a266c6a0dea653ebaaa3ad3fe58d4fd2a287912c12c94678d97afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:25:40 np0005544708 systemd[1]: Started libpod-conmon-3c86059e20a266c6a0dea653ebaaa3ad3fe58d4fd2a287912c12c94678d97afb.scope.
Dec  3 16:25:40 np0005544708 podman[227285]: 2025-12-03 21:25:40.213101693 +0000 UTC m=+0.038068201 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:25:40 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:25:40 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf85d79994d5aab1075fb823d7ed21f36245054ef83308b102d15d062934c9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:25:40 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf85d79994d5aab1075fb823d7ed21f36245054ef83308b102d15d062934c9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:25:40 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf85d79994d5aab1075fb823d7ed21f36245054ef83308b102d15d062934c9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:25:40 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf85d79994d5aab1075fb823d7ed21f36245054ef83308b102d15d062934c9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:25:40 np0005544708 podman[227285]: 2025-12-03 21:25:40.358158802 +0000 UTC m=+0.183125310 container init 3c86059e20a266c6a0dea653ebaaa3ad3fe58d4fd2a287912c12c94678d97afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 16:25:40 np0005544708 podman[227285]: 2025-12-03 21:25:40.380182912 +0000 UTC m=+0.205149410 container start 3c86059e20a266c6a0dea653ebaaa3ad3fe58d4fd2a287912c12c94678d97afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_edison, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:25:40 np0005544708 podman[227285]: 2025-12-03 21:25:40.385408752 +0000 UTC m=+0.210375300 container attach 3c86059e20a266c6a0dea653ebaaa3ad3fe58d4fd2a287912c12c94678d97afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 16:25:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:25:41 np0005544708 lvm[227493]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:25:41 np0005544708 lvm[227493]: VG ceph_vg1 finished
Dec  3 16:25:41 np0005544708 lvm[227492]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:25:41 np0005544708 lvm[227492]: VG ceph_vg0 finished
Dec  3 16:25:41 np0005544708 lvm[227512]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:25:41 np0005544708 lvm[227512]: VG ceph_vg2 finished
Dec  3 16:25:41 np0005544708 crazy_edison[227306]: {}
Dec  3 16:25:41 np0005544708 systemd[1]: libpod-3c86059e20a266c6a0dea653ebaaa3ad3fe58d4fd2a287912c12c94678d97afb.scope: Deactivated successfully.
Dec  3 16:25:41 np0005544708 systemd[1]: libpod-3c86059e20a266c6a0dea653ebaaa3ad3fe58d4fd2a287912c12c94678d97afb.scope: Consumed 1.360s CPU time.
Dec  3 16:25:41 np0005544708 podman[227285]: 2025-12-03 21:25:41.314898788 +0000 UTC m=+1.139865286 container died 3c86059e20a266c6a0dea653ebaaa3ad3fe58d4fd2a287912c12c94678d97afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_edison, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  3 16:25:41 np0005544708 systemd[1]: var-lib-containers-storage-overlay-baf85d79994d5aab1075fb823d7ed21f36245054ef83308b102d15d062934c9e-merged.mount: Deactivated successfully.
Dec  3 16:25:41 np0005544708 podman[227285]: 2025-12-03 21:25:41.362648358 +0000 UTC m=+1.187614856 container remove 3c86059e20a266c6a0dea653ebaaa3ad3fe58d4fd2a287912c12c94678d97afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_edison, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:25:41 np0005544708 systemd[1]: libpod-conmon-3c86059e20a266c6a0dea653ebaaa3ad3fe58d4fd2a287912c12c94678d97afb.scope: Deactivated successfully.
Dec  3 16:25:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:25:41 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:25:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:25:41 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:25:41 np0005544708 python3.9[227549]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:25:41 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:25:41 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:25:42 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v573: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:42 np0005544708 python3.9[227740]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:25:43 np0005544708 python3.9[227893]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:25:43 np0005544708 podman[227895]: 2025-12-03 21:25:43.514387307 +0000 UTC m=+0.122754861 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  3 16:25:44 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v574: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:44 np0005544708 python3.9[228072]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:25:45 np0005544708 python3.9[228225]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:25:45 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:25:46 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v575: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:46 np0005544708 python3.9[228378]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:25:46 np0005544708 python3.9[228531]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:25:47 np0005544708 python3.9[228684]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:25:48 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v576: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:48 np0005544708 python3.9[228837]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:25:48.928 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:25:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:25:48.929 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:25:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:25:48.929 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:25:49 np0005544708 podman[228914]: 2025-12-03 21:25:49.131735306 +0000 UTC m=+0.062451685 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  3 16:25:49 np0005544708 python3.9[229010]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:50 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v577: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:50 np0005544708 python3.9[229162]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:50 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:25:51 np0005544708 python3.9[229314]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:25:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:25:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:25:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:25:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:25:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:25:51 np0005544708 python3.9[229466]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:52 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v578: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:52 np0005544708 python3.9[229618]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:53 np0005544708 python3.9[229770]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:54 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v579: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:54 np0005544708 python3.9[229922]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:54 np0005544708 python3.9[230074]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:55 np0005544708 podman[230099]: 2025-12-03 21:25:55.132244217 +0000 UTC m=+0.068996661 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 16:25:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:25:55 np0005544708 python3.9[230243]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:56 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v580: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:56 np0005544708 python3.9[230395]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:57 np0005544708 python3.9[230547]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:57 np0005544708 python3.9[230699]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:58 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v581: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:25:58 np0005544708 python3.9[230851]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:25:59 np0005544708 python3.9[231003]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:26:00 np0005544708 python3.9[231155]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:26:00 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v582: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:00 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:26:00 np0005544708 python3.9[231307]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:26:01 np0005544708 python3.9[231459]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  3 16:26:02 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v583: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:02 np0005544708 python3.9[231611]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 16:26:02 np0005544708 systemd[1]: Reloading.
Dec  3 16:26:03 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:26:03 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:26:04 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v584: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:04 np0005544708 python3.9[231798]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:26:04 np0005544708 python3.9[231951]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:26:05 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:26:05 np0005544708 python3.9[232104]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:26:06 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v585: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:06 np0005544708 python3.9[232257]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:26:07 np0005544708 python3.9[232410]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:26:08 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v586: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:08 np0005544708 python3.9[232563]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:26:08 np0005544708 python3.9[232716]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:26:09 np0005544708 python3.9[232869]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  3 16:26:10 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v587: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:26:12 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v588: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:12 np0005544708 python3.9[233022]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:26:13 np0005544708 python3.9[233174]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:26:13 np0005544708 podman[233298]: 2025-12-03 21:26:13.84311763 +0000 UTC m=+0.144307070 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  3 16:26:14 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v589: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:14 np0005544708 python3.9[233337]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:26:14 np0005544708 python3.9[233502]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:26:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:26:15 np0005544708 python3.9[233654]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:26:16 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v590: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:16 np0005544708 python3.9[233806]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:26:17 np0005544708 python3.9[233958]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:26:18 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v591: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:18 np0005544708 python3.9[234110]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:26:18 np0005544708 python3.9[234262]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:26:19 np0005544708 podman[234386]: 2025-12-03 21:26:19.642277729 +0000 UTC m=+0.089074377 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  3 16:26:19 np0005544708 python3.9[234430]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:26:20 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v592: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:20 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:26:21
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'vms', 'volumes', 'images', 'backups', 'cephfs.cephfs.data']
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:26:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:26:22 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v593: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:24 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v594: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:25 np0005544708 podman[234557]: 2025-12-03 21:26:25.373239364 +0000 UTC m=+0.081277338 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 16:26:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:26:25 np0005544708 python3.9[234601]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec  3 16:26:26 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v595: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:26 np0005544708 python3.9[234754]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  3 16:26:27 np0005544708 python3.9[234912]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  3 16:26:27 np0005544708 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 16:26:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:26:27 np0005544708 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 16:26:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:26:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:26:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:26:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:26:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:26:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:26:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:26:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:26:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:26:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:26:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:26:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:26:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:26:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:26:28 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v596: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:28 np0005544708 systemd-logind[787]: New session 50 of user zuul.
Dec  3 16:26:28 np0005544708 systemd[1]: Started Session 50 of User zuul.
Dec  3 16:26:28 np0005544708 systemd[1]: session-50.scope: Deactivated successfully.
Dec  3 16:26:28 np0005544708 systemd-logind[787]: Session 50 logged out. Waiting for processes to exit.
Dec  3 16:26:28 np0005544708 systemd-logind[787]: Removed session 50.
Dec  3 16:26:29 np0005544708 python3.9[235099]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:26:30 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v597: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:30 np0005544708 python3.9[235220]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764797189.1395714-1249-110186999234054/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:26:30 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:26:30 np0005544708 python3.9[235370]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:26:31 np0005544708 python3.9[235446]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:26:32 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v598: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:32 np0005544708 python3.9[235596]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:26:32 np0005544708 python3.9[235717]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764797191.6330242-1249-84863402190052/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:26:33 np0005544708 python3.9[235867]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:26:34 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v599: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:34 np0005544708 python3.9[235988]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764797193.017465-1249-148636484996121/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:26:35 np0005544708 python3.9[236138]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:26:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:26:35 np0005544708 python3.9[236259]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764797194.4852717-1249-120755419875994/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:26:36 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v600: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:36 np0005544708 python3.9[236409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:26:37 np0005544708 python3.9[236530]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764797195.8696108-1249-260822366056509/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:26:37 np0005544708 python3.9[236682]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:26:38 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v601: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:38 np0005544708 python3.9[236834]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:26:39 np0005544708 python3.9[236986]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:26:40 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v602: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:40 np0005544708 python3.9[237138]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:26:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:26:41 np0005544708 python3.9[237261]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764797199.8087378-1356-222310726188555/.source _original_basename=.4c23dsph follow=False checksum=19000a86675532e29c6f41be5522228461db82b1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec  3 16:26:42 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v603: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:42 np0005544708 python3.9[237465]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:26:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:26:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:26:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:26:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:26:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:26:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:26:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:26:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:26:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:26:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:26:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:26:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:26:42 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:26:42 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:26:42 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:26:42 np0005544708 podman[237709]: 2025-12-03 21:26:42.981242366 +0000 UTC m=+0.067141450 container create d77dad3f9996074bb99c1064362e954e3a1d3a2b44a7840162b68c78b8d592f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_dirac, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:26:42 np0005544708 python3.9[237697]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:26:43 np0005544708 systemd[1]: Started libpod-conmon-d77dad3f9996074bb99c1064362e954e3a1d3a2b44a7840162b68c78b8d592f3.scope.
Dec  3 16:26:43 np0005544708 podman[237709]: 2025-12-03 21:26:42.952836635 +0000 UTC m=+0.038735749 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:26:43 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:26:43 np0005544708 podman[237709]: 2025-12-03 21:26:43.083547097 +0000 UTC m=+0.169446231 container init d77dad3f9996074bb99c1064362e954e3a1d3a2b44a7840162b68c78b8d592f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 16:26:43 np0005544708 podman[237709]: 2025-12-03 21:26:43.094491681 +0000 UTC m=+0.180390725 container start d77dad3f9996074bb99c1064362e954e3a1d3a2b44a7840162b68c78b8d592f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_dirac, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 16:26:43 np0005544708 podman[237709]: 2025-12-03 21:26:43.097719737 +0000 UTC m=+0.183618871 container attach d77dad3f9996074bb99c1064362e954e3a1d3a2b44a7840162b68c78b8d592f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_dirac, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 16:26:43 np0005544708 musing_dirac[237725]: 167 167
Dec  3 16:26:43 np0005544708 systemd[1]: libpod-d77dad3f9996074bb99c1064362e954e3a1d3a2b44a7840162b68c78b8d592f3.scope: Deactivated successfully.
Dec  3 16:26:43 np0005544708 podman[237709]: 2025-12-03 21:26:43.103222545 +0000 UTC m=+0.189121619 container died d77dad3f9996074bb99c1064362e954e3a1d3a2b44a7840162b68c78b8d592f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_dirac, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  3 16:26:43 np0005544708 systemd[1]: var-lib-containers-storage-overlay-c8921e90701ad8fbdfc9cd7ea4b160d5415373e4d516ddcaaa4c2d52ee25d39e-merged.mount: Deactivated successfully.
Dec  3 16:26:43 np0005544708 podman[237709]: 2025-12-03 21:26:43.158194117 +0000 UTC m=+0.244093171 container remove d77dad3f9996074bb99c1064362e954e3a1d3a2b44a7840162b68c78b8d592f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_dirac, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec  3 16:26:43 np0005544708 systemd[1]: libpod-conmon-d77dad3f9996074bb99c1064362e954e3a1d3a2b44a7840162b68c78b8d592f3.scope: Deactivated successfully.
Dec  3 16:26:43 np0005544708 podman[237817]: 2025-12-03 21:26:43.343965304 +0000 UTC m=+0.046243990 container create a4fe968f85186fcb66f796d63a65a264e4340847bb7eee2ff2803200b69a1da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  3 16:26:43 np0005544708 systemd[1]: Started libpod-conmon-a4fe968f85186fcb66f796d63a65a264e4340847bb7eee2ff2803200b69a1da2.scope.
Dec  3 16:26:43 np0005544708 podman[237817]: 2025-12-03 21:26:43.325437887 +0000 UTC m=+0.027716583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:26:43 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:26:43 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d18325dfb2044c7dc4e353f3ff08951d4bc5e2fb9d5fe1c1827e2b1f47c9405/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:26:43 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d18325dfb2044c7dc4e353f3ff08951d4bc5e2fb9d5fe1c1827e2b1f47c9405/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:26:43 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d18325dfb2044c7dc4e353f3ff08951d4bc5e2fb9d5fe1c1827e2b1f47c9405/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:26:43 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d18325dfb2044c7dc4e353f3ff08951d4bc5e2fb9d5fe1c1827e2b1f47c9405/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:26:43 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d18325dfb2044c7dc4e353f3ff08951d4bc5e2fb9d5fe1c1827e2b1f47c9405/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:26:43 np0005544708 podman[237817]: 2025-12-03 21:26:43.440939181 +0000 UTC m=+0.143217947 container init a4fe968f85186fcb66f796d63a65a264e4340847bb7eee2ff2803200b69a1da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chatelet, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 16:26:43 np0005544708 podman[237817]: 2025-12-03 21:26:43.452250564 +0000 UTC m=+0.154529280 container start a4fe968f85186fcb66f796d63a65a264e4340847bb7eee2ff2803200b69a1da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:26:43 np0005544708 podman[237817]: 2025-12-03 21:26:43.457253659 +0000 UTC m=+0.159532365 container attach a4fe968f85186fcb66f796d63a65a264e4340847bb7eee2ff2803200b69a1da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:26:43 np0005544708 python3.9[237891]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764797202.4094832-1382-85524402632217/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:26:43 np0005544708 romantic_chatelet[237860]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:26:43 np0005544708 romantic_chatelet[237860]: --> All data devices are unavailable
Dec  3 16:26:43 np0005544708 systemd[1]: libpod-a4fe968f85186fcb66f796d63a65a264e4340847bb7eee2ff2803200b69a1da2.scope: Deactivated successfully.
Dec  3 16:26:43 np0005544708 podman[237817]: 2025-12-03 21:26:43.973443297 +0000 UTC m=+0.675721993 container died a4fe968f85186fcb66f796d63a65a264e4340847bb7eee2ff2803200b69a1da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Dec  3 16:26:44 np0005544708 systemd[1]: var-lib-containers-storage-overlay-4d18325dfb2044c7dc4e353f3ff08951d4bc5e2fb9d5fe1c1827e2b1f47c9405-merged.mount: Deactivated successfully.
Dec  3 16:26:44 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v604: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:44 np0005544708 podman[237817]: 2025-12-03 21:26:44.038631564 +0000 UTC m=+0.740910250 container remove a4fe968f85186fcb66f796d63a65a264e4340847bb7eee2ff2803200b69a1da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chatelet, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  3 16:26:44 np0005544708 systemd[1]: libpod-conmon-a4fe968f85186fcb66f796d63a65a264e4340847bb7eee2ff2803200b69a1da2.scope: Deactivated successfully.
Dec  3 16:26:44 np0005544708 podman[237987]: 2025-12-03 21:26:44.149901895 +0000 UTC m=+0.129116220 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 16:26:44 np0005544708 python3.9[238126]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  3 16:26:44 np0005544708 podman[238200]: 2025-12-03 21:26:44.631277261 +0000 UTC m=+0.069724409 container create 69f80d291e77d52c327aadb75789cd451f94f8a04673125305795bc8e133ed62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 16:26:44 np0005544708 systemd[1]: Started libpod-conmon-69f80d291e77d52c327aadb75789cd451f94f8a04673125305795bc8e133ed62.scope.
Dec  3 16:26:44 np0005544708 podman[238200]: 2025-12-03 21:26:44.602285525 +0000 UTC m=+0.040732723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:26:44 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:26:44 np0005544708 podman[238200]: 2025-12-03 21:26:44.734183178 +0000 UTC m=+0.172630376 container init 69f80d291e77d52c327aadb75789cd451f94f8a04673125305795bc8e133ed62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hodgkin, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec  3 16:26:44 np0005544708 podman[238200]: 2025-12-03 21:26:44.743440426 +0000 UTC m=+0.181887564 container start 69f80d291e77d52c327aadb75789cd451f94f8a04673125305795bc8e133ed62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hodgkin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:26:44 np0005544708 podman[238200]: 2025-12-03 21:26:44.747404683 +0000 UTC m=+0.185851841 container attach 69f80d291e77d52c327aadb75789cd451f94f8a04673125305795bc8e133ed62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:26:44 np0005544708 serene_hodgkin[238240]: 167 167
Dec  3 16:26:44 np0005544708 systemd[1]: libpod-69f80d291e77d52c327aadb75789cd451f94f8a04673125305795bc8e133ed62.scope: Deactivated successfully.
Dec  3 16:26:44 np0005544708 podman[238200]: 2025-12-03 21:26:44.751068821 +0000 UTC m=+0.189515959 container died 69f80d291e77d52c327aadb75789cd451f94f8a04673125305795bc8e133ed62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hodgkin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:26:44 np0005544708 systemd[1]: var-lib-containers-storage-overlay-d82c22506a8c547adba70ba6cb52c438dba43b32a1543d84cfbab1f460fa50fb-merged.mount: Deactivated successfully.
Dec  3 16:26:44 np0005544708 podman[238200]: 2025-12-03 21:26:44.797178865 +0000 UTC m=+0.235625983 container remove 69f80d291e77d52c327aadb75789cd451f94f8a04673125305795bc8e133ed62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 16:26:44 np0005544708 systemd[1]: libpod-conmon-69f80d291e77d52c327aadb75789cd451f94f8a04673125305795bc8e133ed62.scope: Deactivated successfully.
Dec  3 16:26:45 np0005544708 podman[238314]: 2025-12-03 21:26:45.007297284 +0000 UTC m=+0.062127975 container create 8337d83a3b20511f9491db1eff44f848c238f9aca2ca371d00ed0fe9e00d41dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khayyam, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:26:45 np0005544708 python3.9[238307]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764797203.8662555-1397-75248352782967/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  3 16:26:45 np0005544708 systemd[1]: Started libpod-conmon-8337d83a3b20511f9491db1eff44f848c238f9aca2ca371d00ed0fe9e00d41dc.scope.
Dec  3 16:26:45 np0005544708 podman[238314]: 2025-12-03 21:26:44.976479649 +0000 UTC m=+0.031310350 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:26:45 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:26:45 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68876c974428ba1116d946dddf9b9b3f5eca22e7b7151ed243fd911faa852b87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:26:45 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68876c974428ba1116d946dddf9b9b3f5eca22e7b7151ed243fd911faa852b87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:26:45 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68876c974428ba1116d946dddf9b9b3f5eca22e7b7151ed243fd911faa852b87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:26:45 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68876c974428ba1116d946dddf9b9b3f5eca22e7b7151ed243fd911faa852b87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:26:45 np0005544708 podman[238314]: 2025-12-03 21:26:45.128971874 +0000 UTC m=+0.183802605 container init 8337d83a3b20511f9491db1eff44f848c238f9aca2ca371d00ed0fe9e00d41dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:26:45 np0005544708 podman[238314]: 2025-12-03 21:26:45.141655964 +0000 UTC m=+0.196486625 container start 8337d83a3b20511f9491db1eff44f848c238f9aca2ca371d00ed0fe9e00d41dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khayyam, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:26:45 np0005544708 podman[238314]: 2025-12-03 21:26:45.145431715 +0000 UTC m=+0.200262406 container attach 8337d83a3b20511f9491db1eff44f848c238f9aca2ca371d00ed0fe9e00d41dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]: {
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:    "0": [
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:        {
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "devices": [
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "/dev/loop3"
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            ],
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "lv_name": "ceph_lv0",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "lv_size": "21470642176",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "name": "ceph_lv0",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "tags": {
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.cluster_name": "ceph",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.crush_device_class": "",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.encrypted": "0",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.objectstore": "bluestore",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.osd_id": "0",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.type": "block",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.vdo": "0",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.with_tpm": "0"
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            },
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "type": "block",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "vg_name": "ceph_vg0"
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:        }
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:    ],
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:    "1": [
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:        {
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "devices": [
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "/dev/loop4"
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            ],
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "lv_name": "ceph_lv1",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "lv_size": "21470642176",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "name": "ceph_lv1",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "tags": {
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.cluster_name": "ceph",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.crush_device_class": "",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.encrypted": "0",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.objectstore": "bluestore",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.osd_id": "1",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.type": "block",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.vdo": "0",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.with_tpm": "0"
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            },
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "type": "block",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "vg_name": "ceph_vg1"
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:        }
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:    ],
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:    "2": [
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:        {
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "devices": [
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "/dev/loop5"
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            ],
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "lv_name": "ceph_lv2",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "lv_size": "21470642176",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "name": "ceph_lv2",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "tags": {
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.cluster_name": "ceph",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.crush_device_class": "",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.encrypted": "0",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.objectstore": "bluestore",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.osd_id": "2",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.type": "block",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.vdo": "0",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:                "ceph.with_tpm": "0"
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            },
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "type": "block",
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:            "vg_name": "ceph_vg2"
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:        }
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]:    ]
Dec  3 16:26:45 np0005544708 keen_khayyam[238332]: }
Dec  3 16:26:45 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:26:45 np0005544708 systemd[1]: libpod-8337d83a3b20511f9491db1eff44f848c238f9aca2ca371d00ed0fe9e00d41dc.scope: Deactivated successfully.
Dec  3 16:26:45 np0005544708 podman[238314]: 2025-12-03 21:26:45.541986399 +0000 UTC m=+0.596817090 container died 8337d83a3b20511f9491db1eff44f848c238f9aca2ca371d00ed0fe9e00d41dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khayyam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 16:26:45 np0005544708 systemd[1]: var-lib-containers-storage-overlay-68876c974428ba1116d946dddf9b9b3f5eca22e7b7151ed243fd911faa852b87-merged.mount: Deactivated successfully.
Dec  3 16:26:45 np0005544708 podman[238314]: 2025-12-03 21:26:45.601881664 +0000 UTC m=+0.656712315 container remove 8337d83a3b20511f9491db1eff44f848c238f9aca2ca371d00ed0fe9e00d41dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khayyam, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:26:45 np0005544708 systemd[1]: libpod-conmon-8337d83a3b20511f9491db1eff44f848c238f9aca2ca371d00ed0fe9e00d41dc.scope: Deactivated successfully.
Dec  3 16:26:45 np0005544708 python3.9[238551]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec  3 16:26:46 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v605: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:46 np0005544708 podman[238566]: 2025-12-03 21:26:46.112211196 +0000 UTC m=+0.037925708 container create d590af1934cdc2100ed46b59eab06d17a1d96eba7f2b1aa4ac08c97382012241 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec  3 16:26:46 np0005544708 systemd[1]: Started libpod-conmon-d590af1934cdc2100ed46b59eab06d17a1d96eba7f2b1aa4ac08c97382012241.scope.
Dec  3 16:26:46 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:26:46 np0005544708 podman[238566]: 2025-12-03 21:26:46.175051499 +0000 UTC m=+0.100766031 container init d590af1934cdc2100ed46b59eab06d17a1d96eba7f2b1aa4ac08c97382012241 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:26:46 np0005544708 podman[238566]: 2025-12-03 21:26:46.180479374 +0000 UTC m=+0.106193876 container start d590af1934cdc2100ed46b59eab06d17a1d96eba7f2b1aa4ac08c97382012241 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_nash, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:26:46 np0005544708 podman[238566]: 2025-12-03 21:26:46.18408157 +0000 UTC m=+0.109796102 container attach d590af1934cdc2100ed46b59eab06d17a1d96eba7f2b1aa4ac08c97382012241 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_nash, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:26:46 np0005544708 nifty_nash[238605]: 167 167
Dec  3 16:26:46 np0005544708 systemd[1]: libpod-d590af1934cdc2100ed46b59eab06d17a1d96eba7f2b1aa4ac08c97382012241.scope: Deactivated successfully.
Dec  3 16:26:46 np0005544708 podman[238566]: 2025-12-03 21:26:46.185940181 +0000 UTC m=+0.111654713 container died d590af1934cdc2100ed46b59eab06d17a1d96eba7f2b1aa4ac08c97382012241 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:26:46 np0005544708 podman[238566]: 2025-12-03 21:26:46.096091254 +0000 UTC m=+0.021805776 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:26:46 np0005544708 systemd[1]: var-lib-containers-storage-overlay-674c565abb1d8ca3a91d8e5e3b5d7e10a3afe006930557859088e6b4f9ed1761-merged.mount: Deactivated successfully.
Dec  3 16:26:46 np0005544708 podman[238566]: 2025-12-03 21:26:46.232777396 +0000 UTC m=+0.158491938 container remove d590af1934cdc2100ed46b59eab06d17a1d96eba7f2b1aa4ac08c97382012241 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_nash, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  3 16:26:46 np0005544708 systemd[1]: libpod-conmon-d590af1934cdc2100ed46b59eab06d17a1d96eba7f2b1aa4ac08c97382012241.scope: Deactivated successfully.
Dec  3 16:26:46 np0005544708 podman[238687]: 2025-12-03 21:26:46.469467906 +0000 UTC m=+0.069764590 container create 9cb07886dee851a9de5c66cb78a119c21a2f40db861ba3c95456452fb8ce1a64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_lamport, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Dec  3 16:26:46 np0005544708 systemd[1]: Started libpod-conmon-9cb07886dee851a9de5c66cb78a119c21a2f40db861ba3c95456452fb8ce1a64.scope.
Dec  3 16:26:46 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:26:46 np0005544708 podman[238687]: 2025-12-03 21:26:46.443288185 +0000 UTC m=+0.043584949 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:26:46 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39015a02d201a1586ab9c00dd22aad65b783a5f1e21115adedb1c1ca10fe57b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:26:46 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39015a02d201a1586ab9c00dd22aad65b783a5f1e21115adedb1c1ca10fe57b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:26:46 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39015a02d201a1586ab9c00dd22aad65b783a5f1e21115adedb1c1ca10fe57b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:26:46 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39015a02d201a1586ab9c00dd22aad65b783a5f1e21115adedb1c1ca10fe57b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:26:46 np0005544708 podman[238687]: 2025-12-03 21:26:46.552599724 +0000 UTC m=+0.152896438 container init 9cb07886dee851a9de5c66cb78a119c21a2f40db861ba3c95456452fb8ce1a64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_lamport, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:26:46 np0005544708 podman[238687]: 2025-12-03 21:26:46.558613425 +0000 UTC m=+0.158910109 container start 9cb07886dee851a9de5c66cb78a119c21a2f40db861ba3c95456452fb8ce1a64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  3 16:26:46 np0005544708 podman[238687]: 2025-12-03 21:26:46.561687057 +0000 UTC m=+0.161983741 container attach 9cb07886dee851a9de5c66cb78a119c21a2f40db861ba3c95456452fb8ce1a64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec  3 16:26:46 np0005544708 python3.9[238778]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 16:26:47 np0005544708 lvm[238923]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:26:47 np0005544708 lvm[238923]: VG ceph_vg1 finished
Dec  3 16:26:47 np0005544708 lvm[238921]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:26:47 np0005544708 lvm[238921]: VG ceph_vg0 finished
Dec  3 16:26:47 np0005544708 lvm[238931]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:26:47 np0005544708 lvm[238931]: VG ceph_vg2 finished
Dec  3 16:26:47 np0005544708 lvm[238937]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:26:47 np0005544708 lvm[238937]: VG ceph_vg1 finished
Dec  3 16:26:47 np0005544708 lvm[238957]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:26:47 np0005544708 lvm[238957]: VG ceph_vg1 finished
Dec  3 16:26:47 np0005544708 admiring_lamport[238744]: {}
Dec  3 16:26:47 np0005544708 systemd[1]: libpod-9cb07886dee851a9de5c66cb78a119c21a2f40db861ba3c95456452fb8ce1a64.scope: Deactivated successfully.
Dec  3 16:26:47 np0005544708 podman[238687]: 2025-12-03 21:26:47.40724977 +0000 UTC m=+1.007546444 container died 9cb07886dee851a9de5c66cb78a119c21a2f40db861ba3c95456452fb8ce1a64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 16:26:47 np0005544708 systemd[1]: libpod-9cb07886dee851a9de5c66cb78a119c21a2f40db861ba3c95456452fb8ce1a64.scope: Consumed 1.351s CPU time.
Dec  3 16:26:47 np0005544708 systemd[1]: var-lib-containers-storage-overlay-e39015a02d201a1586ab9c00dd22aad65b783a5f1e21115adedb1c1ca10fe57b-merged.mount: Deactivated successfully.
Dec  3 16:26:47 np0005544708 podman[238687]: 2025-12-03 21:26:47.451051533 +0000 UTC m=+1.051348207 container remove 9cb07886dee851a9de5c66cb78a119c21a2f40db861ba3c95456452fb8ce1a64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_lamport, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  3 16:26:47 np0005544708 systemd[1]: libpod-conmon-9cb07886dee851a9de5c66cb78a119c21a2f40db861ba3c95456452fb8ce1a64.scope: Deactivated successfully.
Dec  3 16:26:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:26:47 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:26:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:26:47 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:26:47 np0005544708 python3[239021]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 16:26:48 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v606: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:48 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:26:48 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:26:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:26:48.930 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:26:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:26:48.931 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:26:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:26:48.931 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:26:50 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v607: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:50 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:26:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:26:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:26:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:26:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:26:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:26:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:26:52 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v608: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:53 np0005544708 podman[239100]: 2025-12-03 21:26:53.463634921 +0000 UTC m=+3.440159644 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:26:54 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v609: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:56 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v610: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:26:57 np0005544708 podman[239140]: 2025-12-03 21:26:57.600790686 +0000 UTC m=+1.532697612 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  3 16:26:58 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v611: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:26:58 np0005544708 podman[239061]: 2025-12-03 21:26:58.102276741 +0000 UTC m=+10.268996799 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  3 16:26:58 np0005544708 podman[239189]: 2025-12-03 21:26:58.329842918 +0000 UTC m=+0.098252334 container create 5f2a8f92af126201e4bcea7b6c8994833207632e395a767ac859e71af7660679 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=nova_compute_init, org.label-schema.build-date=20251125)
Dec  3 16:26:58 np0005544708 podman[239189]: 2025-12-03 21:26:58.257763466 +0000 UTC m=+0.026172952 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  3 16:26:58 np0005544708 python3[239021]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec  3 16:26:59 np0005544708 python3.9[239380]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:27:00 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v612: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:27:00 np0005544708 python3.9[239534]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec  3 16:27:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:27:01 np0005544708 python3.9[239686]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  3 16:27:02 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v613: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:27:02 np0005544708 python3[239838]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  3 16:27:02 np0005544708 podman[239874]: 2025-12-03 21:27:02.548832125 +0000 UTC m=+0.068783493 container create 6df18d331f2ccf5e19017d42e753cb3908d6fceb137ed1521c71025bcc6b9255 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 16:27:02 np0005544708 podman[239874]: 2025-12-03 21:27:02.511526436 +0000 UTC m=+0.031477864 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  3 16:27:02 np0005544708 python3[239838]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec  3 16:27:03 np0005544708 python3.9[240065]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:27:04 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v614: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:27:04 np0005544708 python3.9[240220]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:27:05 np0005544708 python3.9[240372]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764797224.4154875-1489-193927313095374/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  3 16:27:05 np0005544708 python3.9[240448]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  3 16:27:05 np0005544708 systemd[1]: Reloading.
Dec  3 16:27:05 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:27:05 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:27:06 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v615: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:27:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:27:06 np0005544708 python3.9[240559]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  3 16:27:06 np0005544708 systemd[1]: Reloading.
Dec  3 16:27:06 np0005544708 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  3 16:27:06 np0005544708 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  3 16:27:07 np0005544708 systemd[1]: Starting nova_compute container...
Dec  3 16:27:07 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:27:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5354cb7424d4793461dd352f1c9ab1213e1a51e8f188e4aa61b41b02cde121be/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  3 16:27:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5354cb7424d4793461dd352f1c9ab1213e1a51e8f188e4aa61b41b02cde121be/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  3 16:27:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5354cb7424d4793461dd352f1c9ab1213e1a51e8f188e4aa61b41b02cde121be/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  3 16:27:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5354cb7424d4793461dd352f1c9ab1213e1a51e8f188e4aa61b41b02cde121be/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  3 16:27:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5354cb7424d4793461dd352f1c9ab1213e1a51e8f188e4aa61b41b02cde121be/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  3 16:27:07 np0005544708 podman[240603]: 2025-12-03 21:27:07.393622618 +0000 UTC m=+0.144115902 container init 6df18d331f2ccf5e19017d42e753cb3908d6fceb137ed1521c71025bcc6b9255 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec  3 16:27:07 np0005544708 podman[240603]: 2025-12-03 21:27:07.405907248 +0000 UTC m=+0.156400542 container start 6df18d331f2ccf5e19017d42e753cb3908d6fceb137ed1521c71025bcc6b9255 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  3 16:27:07 np0005544708 podman[240603]: nova_compute
Dec  3 16:27:07 np0005544708 nova_compute[240618]: + sudo -E kolla_set_configs
Dec  3 16:27:07 np0005544708 systemd[1]: Started nova_compute container.
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Validating config file
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Copying service configuration files
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Deleting /etc/ceph
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Creating directory /etc/ceph
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Setting permission for /etc/ceph
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Writing out command to execute
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  3 16:27:07 np0005544708 nova_compute[240618]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  3 16:27:07 np0005544708 nova_compute[240618]: ++ cat /run_command
Dec  3 16:27:07 np0005544708 nova_compute[240618]: + CMD=nova-compute
Dec  3 16:27:07 np0005544708 nova_compute[240618]: + ARGS=
Dec  3 16:27:07 np0005544708 nova_compute[240618]: + sudo kolla_copy_cacerts
Dec  3 16:27:07 np0005544708 nova_compute[240618]: + [[ ! -n '' ]]
Dec  3 16:27:07 np0005544708 nova_compute[240618]: + . kolla_extend_start
Dec  3 16:27:07 np0005544708 nova_compute[240618]: Running command: 'nova-compute'
Dec  3 16:27:07 np0005544708 nova_compute[240618]: + echo 'Running command: '\''nova-compute'\'''
Dec  3 16:27:07 np0005544708 nova_compute[240618]: + umask 0022
Dec  3 16:27:07 np0005544708 nova_compute[240618]: + exec nova-compute
Dec  3 16:27:08 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v616: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:27:08 np0005544708 python3.9[240779]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:27:09 np0005544708 python3.9[240930]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:27:09 np0005544708 nova_compute[240618]: 2025-12-03 21:27:09.658 240622 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  3 16:27:09 np0005544708 nova_compute[240618]: 2025-12-03 21:27:09.658 240622 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  3 16:27:09 np0005544708 nova_compute[240618]: 2025-12-03 21:27:09.658 240622 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  3 16:27:09 np0005544708 nova_compute[240618]: 2025-12-03 21:27:09.659 240622 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  3 16:27:09 np0005544708 nova_compute[240618]: 2025-12-03 21:27:09.785 240622 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:27:09 np0005544708 nova_compute[240618]: 2025-12-03 21:27:09.814 240622 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:27:09 np0005544708 nova_compute[240618]: 2025-12-03 21:27:09.815 240622 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  3 16:27:10 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v617: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:27:10 np0005544708 python3.9[241086]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.491 240622 INFO nova.virt.driver [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.662 240622 INFO nova.compute.provider_config [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.683 240622 DEBUG oslo_concurrency.lockutils [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.684 240622 DEBUG oslo_concurrency.lockutils [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.684 240622 DEBUG oslo_concurrency.lockutils [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.684 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.684 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.685 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.685 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.685 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.685 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.685 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.685 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.685 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.686 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.686 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.686 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.686 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.686 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.686 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.686 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.687 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.687 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.687 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.687 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.687 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.687 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.687 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.688 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.688 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.688 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.688 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.688 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.688 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.688 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.689 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.689 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.689 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.689 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.689 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.689 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.689 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.689 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.690 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.690 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.690 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.690 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.690 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.690 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.691 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.691 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.691 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.691 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.691 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.691 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.691 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.692 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.692 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.692 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.692 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.692 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.692 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.692 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.693 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.693 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.693 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.693 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.693 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.693 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.693 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.693 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.694 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.694 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.694 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.694 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.694 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.694 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.694 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.694 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.695 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.695 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.695 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.695 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.695 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.695 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.696 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.696 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.696 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.696 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.696 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.696 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.696 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.696 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.697 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.697 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.697 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.697 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.697 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.697 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.697 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.697 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.698 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.698 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.698 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.698 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.698 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.698 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.698 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.699 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.699 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.699 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.699 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.699 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.699 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.699 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.699 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.700 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.700 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.700 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.700 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.700 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.700 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.700 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.701 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.701 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.701 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.701 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.701 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.701 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.701 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.701 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.702 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.702 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.702 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.702 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.702 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.702 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.702 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.703 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.703 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.703 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.703 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.703 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.703 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.703 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.703 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.704 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.704 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.704 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.704 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.704 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.704 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.704 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.705 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.705 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.705 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.705 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.705 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.705 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.705 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.706 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.706 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.706 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.706 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.706 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.706 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.706 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.707 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.707 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.707 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.707 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.707 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.708 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.708 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.708 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.708 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.708 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.708 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.708 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.708 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.709 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.709 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.709 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.709 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.709 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.709 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.709 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.710 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.710 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.710 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.710 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.710 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.710 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.710 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.711 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.711 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.711 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.711 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.711 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.711 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.711 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.712 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.712 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.712 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.712 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.712 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.712 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.712 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.712 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.713 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.713 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.713 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.713 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.713 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.713 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.713 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.714 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.714 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.714 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.714 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.714 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.714 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.714 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.715 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.715 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.715 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.715 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.715 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.715 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.715 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.716 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.716 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.716 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.716 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.716 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.716 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.716 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.716 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.717 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.717 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.717 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.717 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.717 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.717 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.717 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.718 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.718 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.718 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.718 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.718 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.718 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.718 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.719 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.719 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.719 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.719 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.719 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.719 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.719 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.720 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.720 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.720 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.720 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.720 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.720 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.720 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.721 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.721 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.721 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.721 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.721 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.721 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.721 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.721 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.722 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.722 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.722 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.722 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.722 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.722 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.722 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.723 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.723 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.723 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.723 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.723 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.723 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.723 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.724 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.724 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.724 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.724 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.724 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.724 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.724 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.725 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.725 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.725 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.725 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.725 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.725 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.725 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.726 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.726 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.726 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.726 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.726 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.726 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.726 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.727 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.727 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.727 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.727 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.727 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.727 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.727 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.728 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.728 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.728 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.728 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.728 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.728 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.728 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.729 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.729 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.729 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.729 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.729 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.729 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.729 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.730 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.730 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.730 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.730 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.730 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.730 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.730 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.731 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.731 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.731 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.731 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.731 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.731 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.731 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.732 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.732 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.732 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.732 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.732 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.732 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.732 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.733 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.733 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.733 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.733 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.733 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.733 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.733 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.733 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.734 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.734 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.734 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.734 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.734 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.735 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.735 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.735 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.735 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.735 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.735 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.735 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.736 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.736 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.736 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.736 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.736 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.736 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.736 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.736 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.737 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.737 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.737 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.737 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.737 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.737 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.737 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.738 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.738 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.738 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.738 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.738 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.738 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.738 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.739 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.739 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.739 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.739 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.739 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.739 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.739 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.740 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.740 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.740 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.740 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.740 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.740 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.740 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.741 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.741 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.741 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.741 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.741 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.741 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.741 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.742 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.742 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.742 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.742 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.742 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.742 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.742 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.743 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.743 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.743 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.743 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.743 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.743 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.743 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.744 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.744 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.744 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.744 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.744 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.744 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.745 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.745 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.745 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.745 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.745 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.745 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.745 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.746 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.746 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.746 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.746 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.746 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.746 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.746 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.747 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.747 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.747 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.747 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.747 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.747 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.747 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.748 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.748 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.748 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.748 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.748 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.748 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.748 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.749 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.749 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.749 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.749 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.749 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.749 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.749 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.750 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.750 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.750 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.750 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.750 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.750 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.751 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.751 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.751 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.751 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.751 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.751 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.751 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.752 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.752 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.752 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.752 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.752 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.752 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.752 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.753 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.753 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.753 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.753 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.753 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.753 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.753 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.754 240622 WARNING oslo_config.cfg [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  3 16:27:10 np0005544708 nova_compute[240618]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  3 16:27:10 np0005544708 nova_compute[240618]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  3 16:27:10 np0005544708 nova_compute[240618]: and ``live_migration_inbound_addr`` respectively.
Dec  3 16:27:10 np0005544708 nova_compute[240618]: ).  Its value may be silently ignored in the future.#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.754 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.754 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.754 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.754 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.754 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.755 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.755 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.755 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.755 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.755 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.755 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.755 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.756 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.756 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.756 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.756 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.756 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.756 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.756 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.rbd_secret_uuid        = c21de27e-a7fd-594b-8324-0697ba9aab3a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.757 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.757 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.757 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.757 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.757 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.757 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.757 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.758 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.758 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.758 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.758 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.758 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.758 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.758 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.759 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.759 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.759 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.759 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.759 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.759 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.760 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.760 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.760 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.760 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.760 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.760 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.760 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.761 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.761 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.761 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.761 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.761 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.761 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.761 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.762 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.762 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.762 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.762 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.762 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.762 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.762 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.762 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.763 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.763 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.763 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.763 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.763 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.763 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.763 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.764 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.764 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.764 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.764 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.764 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.764 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.764 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.765 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.765 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.765 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.765 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.765 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.765 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.765 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.765 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.766 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.766 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.766 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.766 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.766 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.766 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.766 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.767 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.767 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.767 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.767 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.767 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.767 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.767 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.768 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.768 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.768 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.768 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.768 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.768 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.768 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.768 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.769 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.769 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.769 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.769 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.769 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.769 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.769 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.770 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.770 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.770 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.770 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.770 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.770 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.770 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.770 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.771 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.771 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.771 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.771 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.771 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.771 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.771 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.772 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.772 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.772 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.772 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.772 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.772 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.772 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.773 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.773 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.773 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.773 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.773 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.773 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.773 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.774 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.774 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.774 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.774 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.774 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.774 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.774 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.775 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.775 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.775 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.775 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.775 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.775 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.775 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.776 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.776 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.776 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.776 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.776 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.776 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.777 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.777 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.777 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.777 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.777 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.777 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.777 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.778 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.778 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.778 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.778 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.778 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.778 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.778 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.779 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.779 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.779 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.779 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.779 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.779 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.779 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.780 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.780 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.780 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.780 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.780 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.780 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.780 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.781 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.781 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.781 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.781 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.781 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.781 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.781 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.782 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.782 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.782 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.782 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.782 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.782 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.782 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.783 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.783 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.783 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.783 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.783 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.783 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.783 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.784 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.784 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.784 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.784 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.784 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.784 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.784 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.784 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.785 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.785 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.785 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.785 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.785 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.785 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.785 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.786 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.786 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.786 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.786 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.786 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.786 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.786 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.786 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.787 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.787 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.787 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.787 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.787 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.787 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.787 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.788 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.788 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.788 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.788 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.788 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.788 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.788 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.788 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.789 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.789 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.789 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.789 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.789 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.789 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.790 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.790 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.790 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.790 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.790 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.790 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.790 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.791 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.791 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.791 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.791 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.791 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.791 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.791 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.791 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.792 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.792 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.792 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.792 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.792 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.792 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.792 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.793 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.793 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.793 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.793 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.793 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.793 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.793 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.794 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.794 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.794 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.794 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.794 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.794 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.794 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.795 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.795 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.795 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.795 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.795 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.795 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.795 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.796 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.796 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.796 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.796 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.796 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.796 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.796 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.797 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.797 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.797 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.797 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.797 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.797 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.797 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.797 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.798 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.798 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.798 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.798 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.798 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.798 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.798 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.799 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.799 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.799 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.799 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.799 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.799 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.799 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.800 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.800 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.800 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.800 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.800 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.800 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.800 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.801 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.801 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.801 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.801 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.801 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.801 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.801 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.802 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.802 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.802 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.802 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.802 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.802 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.802 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.803 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.803 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.803 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.803 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.803 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.803 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.803 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.803 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.804 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.804 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.804 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.804 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.804 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.804 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.804 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.805 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.805 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.805 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.805 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.805 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.805 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.805 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.805 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.806 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.806 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.806 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.806 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.806 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.806 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.806 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.807 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.807 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.807 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.807 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.807 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.807 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.807 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.808 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.808 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.808 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.808 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.808 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.808 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.808 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.808 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.809 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.809 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.809 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.809 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.809 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.810 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.810 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.810 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.810 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.810 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.810 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.810 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.811 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.811 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.811 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.811 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.811 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.811 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.811 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.812 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.812 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.812 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.812 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.812 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.812 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.812 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.813 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.813 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.813 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.813 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.813 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.813 240622 DEBUG oslo_service.service [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.814 240622 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.831 240622 DEBUG nova.virt.libvirt.host [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.832 240622 DEBUG nova.virt.libvirt.host [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.832 240622 DEBUG nova.virt.libvirt.host [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.832 240622 DEBUG nova.virt.libvirt.host [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  3 16:27:10 np0005544708 systemd[1]: Starting libvirt QEMU daemon...
Dec  3 16:27:10 np0005544708 systemd[1]: Started libvirt QEMU daemon.
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.898 240622 DEBUG nova.virt.libvirt.host [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f238353aaf0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.901 240622 DEBUG nova.virt.libvirt.host [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f238353aaf0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.902 240622 INFO nova.virt.libvirt.driver [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.926 240622 WARNING nova.virt.libvirt.driver [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  3 16:27:10 np0005544708 nova_compute[240618]: 2025-12-03 21:27:10.926 240622 DEBUG nova.virt.libvirt.volume.mount [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  3 16:27:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:27:11 np0005544708 python3.9[241291]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  3 16:27:11 np0005544708 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 16:27:11 np0005544708 nova_compute[240618]: 2025-12-03 21:27:11.892 240622 INFO nova.virt.libvirt.host [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Libvirt host capabilities <capabilities>
Dec  3 16:27:11 np0005544708 nova_compute[240618]: 
Dec  3 16:27:11 np0005544708 nova_compute[240618]:  <host>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <uuid>fe808748-0a27-4a3c-9875-a9777da5fa17</uuid>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <cpu>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <arch>x86_64</arch>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model>EPYC-Rome-v4</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <vendor>AMD</vendor>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <microcode version='16777317'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <signature family='23' model='49' stepping='0'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='x2apic'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='tsc-deadline'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='osxsave'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='hypervisor'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='tsc_adjust'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='spec-ctrl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='stibp'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='arch-capabilities'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='ssbd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='cmp_legacy'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='topoext'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='virt-ssbd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='lbrv'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='tsc-scale'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='vmcb-clean'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='pause-filter'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='pfthreshold'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='svme-addr-chk'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='rdctl-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='skip-l1dfl-vmentry'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='mds-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature name='pschange-mc-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <pages unit='KiB' size='4'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <pages unit='KiB' size='2048'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <pages unit='KiB' size='1048576'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    </cpu>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <power_management>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <suspend_mem/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    </power_management>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <iommu support='no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <migration_features>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <live/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <uri_transports>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <uri_transport>tcp</uri_transport>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <uri_transport>rdma</uri_transport>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </uri_transports>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    </migration_features>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <topology>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <cells num='1'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <cell id='0'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:          <memory unit='KiB'>7864316</memory>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:          <pages unit='KiB' size='4'>1966079</pages>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:          <pages unit='KiB' size='2048'>0</pages>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:          <distances>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:            <sibling id='0' value='10'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:          </distances>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:          <cpus num='8'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:          </cpus>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        </cell>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </cells>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    </topology>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <cache>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    </cache>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <secmodel>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model>selinux</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <doi>0</doi>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    </secmodel>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <secmodel>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model>dac</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <doi>0</doi>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    </secmodel>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:  </host>
Dec  3 16:27:11 np0005544708 nova_compute[240618]: 
Dec  3 16:27:11 np0005544708 nova_compute[240618]:  <guest>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <os_type>hvm</os_type>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <arch name='i686'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <wordsize>32</wordsize>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <domain type='qemu'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <domain type='kvm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    </arch>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <features>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <pae/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <nonpae/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <acpi default='on' toggle='yes'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <apic default='on' toggle='no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <cpuselection/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <deviceboot/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <disksnapshot default='on' toggle='no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <externalSnapshot/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    </features>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:  </guest>
Dec  3 16:27:11 np0005544708 nova_compute[240618]: 
Dec  3 16:27:11 np0005544708 nova_compute[240618]:  <guest>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <os_type>hvm</os_type>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <arch name='x86_64'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <wordsize>64</wordsize>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <domain type='qemu'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <domain type='kvm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    </arch>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <features>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <acpi default='on' toggle='yes'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <apic default='on' toggle='no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <cpuselection/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <deviceboot/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <disksnapshot default='on' toggle='no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <externalSnapshot/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    </features>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:  </guest>
Dec  3 16:27:11 np0005544708 nova_compute[240618]: 
Dec  3 16:27:11 np0005544708 nova_compute[240618]: </capabilities>
Dec  3 16:27:11 np0005544708 nova_compute[240618]: #033[00m
Dec  3 16:27:11 np0005544708 nova_compute[240618]: 2025-12-03 21:27:11.904 240622 DEBUG nova.virt.libvirt.host [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  3 16:27:11 np0005544708 nova_compute[240618]: 2025-12-03 21:27:11.937 240622 DEBUG nova.virt.libvirt.host [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  3 16:27:11 np0005544708 nova_compute[240618]: <domainCapabilities>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:  <domain>kvm</domain>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:  <arch>i686</arch>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:  <vcpu max='4096'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:  <iothreads supported='yes'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:  <os supported='yes'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <enum name='firmware'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <loader supported='yes'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <value>rom</value>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <value>pflash</value>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <enum name='readonly'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <value>yes</value>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <value>no</value>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <enum name='secure'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <value>no</value>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    </loader>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:  </os>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:  <cpu>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <mode name='host-passthrough' supported='yes'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <enum name='hostPassthroughMigratable'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <value>on</value>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <value>off</value>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    </mode>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <mode name='maximum' supported='yes'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <enum name='maximumMigratable'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <value>on</value>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <value>off</value>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    </mode>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <mode name='host-model' supported='yes'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <vendor>AMD</vendor>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='x2apic'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='hypervisor'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='stibp'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='ssbd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='overflow-recov'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='succor'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='ibrs'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='lbrv'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='tsc-scale'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='flushbyasid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='pause-filter'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='pfthreshold'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <feature policy='disable' name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    </mode>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:    <mode name='custom' supported='yes'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Broadwell'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-IBRS'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-noTSX'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-v1'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-v2'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-v3'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-v4'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Cooperlake'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Cooperlake-v1'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Cooperlake-v2'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Denverton'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='mpx'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Denverton-v1'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='mpx'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Denverton-v2'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Denverton-v3'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Dhyana-v2'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Genoa'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='amd-psfd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='auto-ibrs'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='stibp-always-on'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='amd-psfd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='auto-ibrs'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='stibp-always-on'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Milan'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Milan-v1'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Milan-v2'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='amd-psfd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='stibp-always-on'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Rome'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Rome-v1'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Rome-v2'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Rome-v3'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='EPYC-v3'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='EPYC-v4'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='GraniteRapids'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='amx-fp16'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='prefetchiti'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='GraniteRapids-v1'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='amx-fp16'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='prefetchiti'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='GraniteRapids-v2'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='amx-fp16'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx10'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx10-128'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx10-256'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx10-512'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='prefetchiti'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Haswell'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Haswell-IBRS'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Haswell-noTSX'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Haswell-v1'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Haswell-v2'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Haswell-v3'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Haswell-v4'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v1'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v2'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v3'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v4'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v5'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v6'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v7'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='IvyBridge'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='IvyBridge-IBRS'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='IvyBridge-v1'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:      <blockers model='IvyBridge-v2'>
Dec  3 16:27:11 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='KnightsMill'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-4fmaps'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-4vnniw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512er'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512pf'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='KnightsMill-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-4fmaps'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-4vnniw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512er'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512pf'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Opteron_G4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fma4'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xop'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Opteron_G4-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fma4'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xop'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Opteron_G5'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fma4'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tbm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xop'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Opteron_G5-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fma4'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tbm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xop'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SapphireRapids'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SapphireRapids-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SapphireRapids-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SapphireRapids-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SierraForest'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-ne-convert'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cmpccxadd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SierraForest-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-ne-convert'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cmpccxadd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v5'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='core-capability'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mpx'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='split-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='core-capability'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mpx'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='split-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='core-capability'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='split-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='core-capability'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='split-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='athlon'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnow'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnowext'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='athlon-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnow'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnowext'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='core2duo'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='core2duo-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='coreduo'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='coreduo-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='n270'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='n270-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='phenom'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnow'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnowext'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='phenom-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnow'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnowext'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </mode>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </cpu>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <memoryBacking supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <enum name='sourceType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>file</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>anonymous</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>memfd</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </memoryBacking>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <devices>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <disk supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='diskDevice'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>disk</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>cdrom</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>floppy</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>lun</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='bus'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>fdc</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>scsi</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>usb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>sata</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio-transitional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio-non-transitional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </disk>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <graphics supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vnc</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>egl-headless</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>dbus</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </graphics>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <video supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='modelType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vga</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>cirrus</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>none</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>bochs</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>ramfb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </video>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <hostdev supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='mode'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>subsystem</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='startupPolicy'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>default</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>mandatory</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>requisite</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>optional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='subsysType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>usb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pci</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>scsi</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='capsType'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='pciBackend'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </hostdev>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <rng supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio-transitional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio-non-transitional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendModel'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>random</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>egd</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>builtin</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </rng>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <filesystem supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='driverType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>path</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>handle</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtiofs</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </filesystem>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <tpm supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tpm-tis</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tpm-crb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendModel'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>emulator</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>external</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendVersion'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>2.0</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </tpm>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <redirdev supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='bus'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>usb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </redirdev>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <channel supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pty</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>unix</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </channel>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <crypto supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>qemu</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendModel'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>builtin</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </crypto>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <interface supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>default</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>passt</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </interface>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <panic supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>isa</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>hyperv</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </panic>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <console supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>null</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vc</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pty</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>dev</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>file</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pipe</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>stdio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>udp</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tcp</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>unix</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>qemu-vdagent</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>dbus</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </console>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </devices>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <features>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <gic supported='no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <vmcoreinfo supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <genid supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <backingStoreInput supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <backup supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <async-teardown supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <ps2 supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <sev supported='no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <sgx supported='no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <hyperv supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='features'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>relaxed</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vapic</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>spinlocks</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vpindex</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>runtime</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>synic</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>stimer</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>reset</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vendor_id</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>frequencies</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>reenlightenment</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tlbflush</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>ipi</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>avic</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>emsr_bitmap</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>xmm_input</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <defaults>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <spinlocks>4095</spinlocks>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <stimer_direct>on</stimer_direct>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </defaults>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </hyperv>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <launchSecurity supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='sectype'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tdx</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </launchSecurity>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </features>
Dec  3 16:27:12 np0005544708 nova_compute[240618]: </domainCapabilities>
Dec  3 16:27:12 np0005544708 nova_compute[240618]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:11.946 240622 DEBUG nova.virt.libvirt.host [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  3 16:27:12 np0005544708 nova_compute[240618]: <domainCapabilities>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <domain>kvm</domain>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <arch>i686</arch>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <vcpu max='240'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <iothreads supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <os supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <enum name='firmware'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <loader supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>rom</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pflash</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='readonly'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>yes</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>no</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='secure'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>no</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </loader>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </os>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <cpu>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <mode name='host-passthrough' supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='hostPassthroughMigratable'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>on</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>off</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </mode>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <mode name='maximum' supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='maximumMigratable'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>on</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>off</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </mode>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <mode name='host-model' supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <vendor>AMD</vendor>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='x2apic'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='hypervisor'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='stibp'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='ssbd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='overflow-recov'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='succor'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='ibrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='lbrv'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='tsc-scale'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='flushbyasid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='pause-filter'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='pfthreshold'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='disable' name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </mode>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <mode name='custom' supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-noTSX'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cooperlake'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cooperlake-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cooperlake-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Denverton'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mpx'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Denverton-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mpx'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Denverton-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Denverton-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Dhyana-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Genoa'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amd-psfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='auto-ibrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='stibp-always-on'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amd-psfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='auto-ibrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='stibp-always-on'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Milan'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Milan-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Milan-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amd-psfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='stibp-always-on'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Rome'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Rome-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Rome-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Rome-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='GraniteRapids'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='prefetchiti'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='GraniteRapids-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='prefetchiti'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='GraniteRapids-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx10'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx10-128'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx10-256'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx10-512'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='prefetchiti'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-noTSX'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v5'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v6'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v7'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='IvyBridge'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='IvyBridge-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='IvyBridge-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='IvyBridge-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='KnightsMill'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-4fmaps'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-4vnniw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512er'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512pf'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='KnightsMill-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-4fmaps'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-4vnniw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512er'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512pf'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Opteron_G4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fma4'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xop'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Opteron_G4-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fma4'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xop'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Opteron_G5'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fma4'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tbm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xop'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Opteron_G5-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fma4'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tbm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xop'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SapphireRapids'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SapphireRapids-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SapphireRapids-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SapphireRapids-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v618: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SierraForest'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-ne-convert'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cmpccxadd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SierraForest-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-ne-convert'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cmpccxadd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v5'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='core-capability'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mpx'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='split-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='core-capability'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mpx'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='split-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='core-capability'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='split-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='core-capability'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='split-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='athlon'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnow'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnowext'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='athlon-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnow'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnowext'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='core2duo'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='core2duo-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='coreduo'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='coreduo-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='n270'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='n270-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='phenom'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnow'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnowext'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='phenom-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnow'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnowext'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </mode>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </cpu>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <memoryBacking supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <enum name='sourceType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>file</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>anonymous</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>memfd</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </memoryBacking>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <devices>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <disk supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='diskDevice'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>disk</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>cdrom</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>floppy</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>lun</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='bus'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>ide</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>fdc</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>scsi</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>usb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>sata</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio-transitional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio-non-transitional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </disk>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <graphics supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vnc</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>egl-headless</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>dbus</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </graphics>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <video supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='modelType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vga</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>cirrus</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>none</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>bochs</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>ramfb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </video>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <hostdev supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='mode'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>subsystem</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='startupPolicy'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>default</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>mandatory</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>requisite</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>optional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='subsysType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>usb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pci</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>scsi</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='capsType'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='pciBackend'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </hostdev>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <rng supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio-transitional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio-non-transitional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendModel'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>random</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>egd</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>builtin</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </rng>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <filesystem supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='driverType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>path</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>handle</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtiofs</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </filesystem>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <tpm supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tpm-tis</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tpm-crb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendModel'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>emulator</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>external</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendVersion'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>2.0</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </tpm>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <redirdev supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='bus'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>usb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </redirdev>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <channel supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pty</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>unix</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </channel>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <crypto supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>qemu</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendModel'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>builtin</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </crypto>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <interface supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>default</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>passt</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </interface>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <panic supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>isa</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>hyperv</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </panic>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <console supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>null</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vc</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pty</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>dev</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>file</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pipe</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>stdio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>udp</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tcp</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>unix</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>qemu-vdagent</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>dbus</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </console>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </devices>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <features>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <gic supported='no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <vmcoreinfo supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <genid supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <backingStoreInput supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <backup supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <async-teardown supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <ps2 supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <sev supported='no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <sgx supported='no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <hyperv supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='features'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>relaxed</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vapic</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>spinlocks</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vpindex</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>runtime</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>synic</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>stimer</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>reset</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vendor_id</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>frequencies</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>reenlightenment</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tlbflush</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>ipi</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>avic</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>emsr_bitmap</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>xmm_input</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <defaults>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <spinlocks>4095</spinlocks>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <stimer_direct>on</stimer_direct>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </defaults>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </hyperv>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <launchSecurity supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='sectype'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tdx</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </launchSecurity>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </features>
Dec  3 16:27:12 np0005544708 nova_compute[240618]: </domainCapabilities>
Dec  3 16:27:12 np0005544708 nova_compute[240618]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.000 240622 DEBUG nova.virt.libvirt.host [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.006 240622 DEBUG nova.virt.libvirt.host [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  3 16:27:12 np0005544708 nova_compute[240618]: <domainCapabilities>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <domain>kvm</domain>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <arch>x86_64</arch>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <vcpu max='240'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <iothreads supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <os supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <enum name='firmware'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <loader supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>rom</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pflash</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='readonly'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>yes</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>no</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='secure'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>no</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </loader>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </os>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <cpu>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <mode name='host-passthrough' supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='hostPassthroughMigratable'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>on</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>off</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </mode>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <mode name='maximum' supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='maximumMigratable'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>on</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>off</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </mode>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <mode name='host-model' supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <vendor>AMD</vendor>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='x2apic'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='hypervisor'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='stibp'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='ssbd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='overflow-recov'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='succor'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='ibrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='lbrv'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='tsc-scale'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='flushbyasid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='pause-filter'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='pfthreshold'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='disable' name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </mode>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <mode name='custom' supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-noTSX'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cooperlake'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cooperlake-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cooperlake-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Denverton'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mpx'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Denverton-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mpx'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Denverton-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Denverton-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Dhyana-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Genoa'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amd-psfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='auto-ibrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='stibp-always-on'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amd-psfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='auto-ibrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='stibp-always-on'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Milan'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Milan-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Milan-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amd-psfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='stibp-always-on'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Rome'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Rome-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Rome-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Rome-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='GraniteRapids'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='prefetchiti'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='GraniteRapids-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='prefetchiti'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='GraniteRapids-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx10'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx10-128'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx10-256'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx10-512'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='prefetchiti'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-noTSX'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v5'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v6'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v7'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='IvyBridge'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='IvyBridge-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='IvyBridge-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='IvyBridge-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='KnightsMill'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-4fmaps'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-4vnniw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512er'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512pf'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='KnightsMill-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-4fmaps'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-4vnniw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512er'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512pf'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Opteron_G4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fma4'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xop'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Opteron_G4-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fma4'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xop'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Opteron_G5'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fma4'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tbm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xop'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Opteron_G5-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fma4'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tbm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xop'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SapphireRapids'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SapphireRapids-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SapphireRapids-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SapphireRapids-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SierraForest'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-ne-convert'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cmpccxadd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SierraForest-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-ne-convert'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cmpccxadd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v5'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='core-capability'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mpx'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='split-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='core-capability'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mpx'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='split-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='core-capability'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='split-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='core-capability'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='split-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='athlon'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnow'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnowext'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='athlon-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnow'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnowext'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='core2duo'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='core2duo-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='coreduo'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='coreduo-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='n270'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='n270-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='phenom'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnow'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnowext'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='phenom-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnow'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnowext'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </mode>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </cpu>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <memoryBacking supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <enum name='sourceType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>file</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>anonymous</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>memfd</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </memoryBacking>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <devices>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <disk supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='diskDevice'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>disk</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>cdrom</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>floppy</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>lun</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='bus'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>ide</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>fdc</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>scsi</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>usb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>sata</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio-transitional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio-non-transitional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </disk>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <graphics supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vnc</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>egl-headless</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>dbus</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </graphics>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <video supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='modelType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vga</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>cirrus</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>none</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>bochs</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>ramfb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </video>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <hostdev supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='mode'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>subsystem</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='startupPolicy'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>default</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>mandatory</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>requisite</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>optional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='subsysType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>usb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pci</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>scsi</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='capsType'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='pciBackend'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </hostdev>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <rng supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio-transitional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio-non-transitional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendModel'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>random</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>egd</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>builtin</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </rng>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <filesystem supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='driverType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>path</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>handle</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtiofs</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </filesystem>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <tpm supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tpm-tis</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tpm-crb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendModel'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>emulator</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>external</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendVersion'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>2.0</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </tpm>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <redirdev supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='bus'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>usb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </redirdev>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <channel supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pty</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>unix</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </channel>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <crypto supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>qemu</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendModel'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>builtin</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </crypto>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <interface supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>default</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>passt</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </interface>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <panic supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>isa</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>hyperv</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </panic>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <console supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>null</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vc</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pty</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>dev</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>file</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pipe</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>stdio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>udp</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tcp</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>unix</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>qemu-vdagent</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>dbus</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </console>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </devices>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <features>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <gic supported='no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <vmcoreinfo supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <genid supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <backingStoreInput supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <backup supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <async-teardown supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <ps2 supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <sev supported='no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <sgx supported='no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <hyperv supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='features'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>relaxed</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vapic</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>spinlocks</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vpindex</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>runtime</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>synic</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>stimer</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>reset</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vendor_id</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>frequencies</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>reenlightenment</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tlbflush</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>ipi</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>avic</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>emsr_bitmap</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>xmm_input</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <defaults>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <spinlocks>4095</spinlocks>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <stimer_direct>on</stimer_direct>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </defaults>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </hyperv>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <launchSecurity supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='sectype'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tdx</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </launchSecurity>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </features>
Dec  3 16:27:12 np0005544708 nova_compute[240618]: </domainCapabilities>
Dec  3 16:27:12 np0005544708 nova_compute[240618]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.068 240622 DEBUG nova.virt.libvirt.host [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  3 16:27:12 np0005544708 nova_compute[240618]: <domainCapabilities>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <domain>kvm</domain>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <arch>x86_64</arch>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <vcpu max='4096'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <iothreads supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <os supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <enum name='firmware'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>efi</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <loader supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>rom</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pflash</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='readonly'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>yes</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>no</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='secure'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>yes</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>no</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </loader>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </os>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <cpu>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <mode name='host-passthrough' supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='hostPassthroughMigratable'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>on</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>off</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </mode>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <mode name='maximum' supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='maximumMigratable'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>on</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>off</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </mode>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <mode name='host-model' supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <vendor>AMD</vendor>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='x2apic'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='hypervisor'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='stibp'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='ssbd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='overflow-recov'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='succor'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='ibrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='lbrv'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='tsc-scale'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='flushbyasid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='pause-filter'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='pfthreshold'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <feature policy='disable' name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </mode>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <mode name='custom' supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-noTSX'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Broadwell-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cooperlake'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cooperlake-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Cooperlake-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Denverton'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mpx'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Denverton-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mpx'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Denverton-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Denverton-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Dhyana-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Genoa'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amd-psfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='auto-ibrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='stibp-always-on'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amd-psfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='auto-ibrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='stibp-always-on'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Milan'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Milan-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Milan-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amd-psfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='stibp-always-on'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Rome'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Rome-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Rome-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-Rome-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='EPYC-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='GraniteRapids'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='prefetchiti'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='GraniteRapids-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='prefetchiti'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='GraniteRapids-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx10'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx10-128'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx10-256'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx10-512'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='prefetchiti'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-noTSX'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Haswell-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v5'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v6'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Icelake-Server-v7'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='IvyBridge'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='IvyBridge-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='IvyBridge-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='IvyBridge-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='KnightsMill'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-4fmaps'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-4vnniw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512er'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512pf'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='KnightsMill-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-4fmaps'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-4vnniw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512er'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512pf'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Opteron_G4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fma4'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xop'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Opteron_G4-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fma4'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xop'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Opteron_G5'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fma4'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tbm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xop'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Opteron_G5-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fma4'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tbm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xop'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SapphireRapids'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SapphireRapids-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SapphireRapids-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SapphireRapids-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='amx-tile'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-bf16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-fp16'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bitalg'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrc'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fzrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='la57'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='taa-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xfd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SierraForest'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-ne-convert'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cmpccxadd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='SierraForest-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-ifma'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-ne-convert'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx-vnni-int8'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cmpccxadd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fbsdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='fsrs'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ibrs-all'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mcdt-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pbrsb-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='psdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='serialize'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vaes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Client-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='hle'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='rtm'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Skylake-Server-v5'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512bw'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512cd'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512dq'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512f'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='avx512vl'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='invpcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pcid'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='pku'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='core-capability'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mpx'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='split-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='core-capability'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='mpx'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='split-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge-v2'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='core-capability'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='split-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge-v3'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='core-capability'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='split-lock-detect'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='Snowridge-v4'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='cldemote'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='erms'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='gfni'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdir64b'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='movdiri'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='xsaves'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='athlon'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnow'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnowext'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='athlon-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnow'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnowext'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='core2duo'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='core2duo-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='coreduo'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='coreduo-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='n270'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='n270-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='ss'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='phenom'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnow'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnowext'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <blockers model='phenom-v1'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnow'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <feature name='3dnowext'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </blockers>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </mode>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </cpu>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <memoryBacking supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <enum name='sourceType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>file</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>anonymous</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <value>memfd</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </memoryBacking>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <devices>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <disk supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='diskDevice'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>disk</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>cdrom</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>floppy</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>lun</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='bus'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>fdc</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>scsi</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>usb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>sata</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio-transitional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio-non-transitional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </disk>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <graphics supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vnc</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>egl-headless</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>dbus</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </graphics>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <video supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='modelType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vga</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>cirrus</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>none</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>bochs</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>ramfb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </video>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <hostdev supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='mode'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>subsystem</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='startupPolicy'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>default</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>mandatory</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>requisite</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>optional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='subsysType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>usb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pci</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>scsi</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='capsType'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='pciBackend'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </hostdev>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <rng supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio-transitional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtio-non-transitional</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendModel'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>random</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>egd</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>builtin</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </rng>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <filesystem supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='driverType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>path</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>handle</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>virtiofs</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </filesystem>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <tpm supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tpm-tis</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tpm-crb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendModel'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>emulator</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>external</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendVersion'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>2.0</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </tpm>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <redirdev supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='bus'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>usb</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </redirdev>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <channel supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pty</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>unix</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </channel>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <crypto supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>qemu</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendModel'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>builtin</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </crypto>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <interface supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='backendType'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>default</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>passt</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </interface>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <panic supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='model'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>isa</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>hyperv</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </panic>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <console supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='type'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>null</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vc</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pty</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>dev</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>file</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>pipe</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>stdio</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>udp</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tcp</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>unix</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>qemu-vdagent</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>dbus</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </console>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </devices>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  <features>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <gic supported='no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <vmcoreinfo supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <genid supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <backingStoreInput supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <backup supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <async-teardown supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <ps2 supported='yes'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <sev supported='no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <sgx supported='no'/>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <hyperv supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='features'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>relaxed</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vapic</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>spinlocks</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vpindex</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>runtime</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>synic</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>stimer</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>reset</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>vendor_id</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>frequencies</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>reenlightenment</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tlbflush</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>ipi</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>avic</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>emsr_bitmap</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>xmm_input</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <defaults>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <spinlocks>4095</spinlocks>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <stimer_direct>on</stimer_direct>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </defaults>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </hyperv>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    <launchSecurity supported='yes'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      <enum name='sectype'>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:        <value>tdx</value>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:      </enum>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:    </launchSecurity>
Dec  3 16:27:12 np0005544708 nova_compute[240618]:  </features>
Dec  3 16:27:12 np0005544708 nova_compute[240618]: </domainCapabilities>
Dec  3 16:27:12 np0005544708 nova_compute[240618]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.124 240622 DEBUG nova.virt.libvirt.host [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.125 240622 DEBUG nova.virt.libvirt.host [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.125 240622 DEBUG nova.virt.libvirt.host [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.125 240622 INFO nova.virt.libvirt.host [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Secure Boot support detected#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.128 240622 INFO nova.virt.libvirt.driver [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.128 240622 INFO nova.virt.libvirt.driver [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.144 240622 DEBUG nova.virt.libvirt.driver [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.185 240622 INFO nova.virt.node [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Determined node identity 94aba67c-5c5e-45d0-83d1-33eb467c8775 from /var/lib/nova/compute_id#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.204 240622 WARNING nova.compute.manager [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Compute nodes ['94aba67c-5c5e-45d0-83d1-33eb467c8775'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.247 240622 INFO nova.compute.manager [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.301 240622 WARNING nova.compute.manager [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.302 240622 DEBUG oslo_concurrency.lockutils [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.302 240622 DEBUG oslo_concurrency.lockutils [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.303 240622 DEBUG oslo_concurrency.lockutils [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.303 240622 DEBUG nova.compute.resource_tracker [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.304 240622 DEBUG oslo_concurrency.processutils [None req-a441af6e-ce83-45ed-80c3-d269004b9802 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:27:12 np0005544708 python3.9[241478]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  3 16:27:12 np0005544708 systemd[1]: Stopping nova_compute container...
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.677 240622 DEBUG oslo_concurrency.lockutils [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.678 240622 DEBUG oslo_concurrency.lockutils [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 16:27:12 np0005544708 nova_compute[240618]: 2025-12-03 21:27:12.678 240622 DEBUG oslo_concurrency.lockutils [None req-1d878bf7-83de-4ce0-b7c1-ade1844721e4 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 16:27:13 np0005544708 virtqemud[241184]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  3 16:27:13 np0005544708 virtqemud[241184]: hostname: compute-0
Dec  3 16:27:13 np0005544708 virtqemud[241184]: End of file while reading data: Input/output error
Dec  3 16:27:13 np0005544708 systemd[1]: libpod-6df18d331f2ccf5e19017d42e753cb3908d6fceb137ed1521c71025bcc6b9255.scope: Deactivated successfully.
Dec  3 16:27:13 np0005544708 systemd[1]: libpod-6df18d331f2ccf5e19017d42e753cb3908d6fceb137ed1521c71025bcc6b9255.scope: Consumed 3.544s CPU time.
Dec  3 16:27:13 np0005544708 podman[241502]: 2025-12-03 21:27:13.045315798 +0000 UTC m=+0.418307317 container died 6df18d331f2ccf5e19017d42e753cb3908d6fceb137ed1521c71025bcc6b9255 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:27:13 np0005544708 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6df18d331f2ccf5e19017d42e753cb3908d6fceb137ed1521c71025bcc6b9255-userdata-shm.mount: Deactivated successfully.
Dec  3 16:27:13 np0005544708 systemd[1]: var-lib-containers-storage-overlay-5354cb7424d4793461dd352f1c9ab1213e1a51e8f188e4aa61b41b02cde121be-merged.mount: Deactivated successfully.
Dec  3 16:27:13 np0005544708 podman[241502]: 2025-12-03 21:27:13.891690833 +0000 UTC m=+1.264682362 container cleanup 6df18d331f2ccf5e19017d42e753cb3908d6fceb137ed1521c71025bcc6b9255 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 16:27:13 np0005544708 podman[241502]: nova_compute
Dec  3 16:27:13 np0005544708 podman[241537]: nova_compute
Dec  3 16:27:13 np0005544708 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec  3 16:27:13 np0005544708 systemd[1]: Stopped nova_compute container.
Dec  3 16:27:13 np0005544708 systemd[1]: Starting nova_compute container...
Dec  3 16:27:14 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v619: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:27:14 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:27:14 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5354cb7424d4793461dd352f1c9ab1213e1a51e8f188e4aa61b41b02cde121be/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  3 16:27:14 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5354cb7424d4793461dd352f1c9ab1213e1a51e8f188e4aa61b41b02cde121be/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  3 16:27:14 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5354cb7424d4793461dd352f1c9ab1213e1a51e8f188e4aa61b41b02cde121be/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  3 16:27:14 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5354cb7424d4793461dd352f1c9ab1213e1a51e8f188e4aa61b41b02cde121be/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  3 16:27:14 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5354cb7424d4793461dd352f1c9ab1213e1a51e8f188e4aa61b41b02cde121be/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  3 16:27:14 np0005544708 podman[241550]: 2025-12-03 21:27:14.149034467 +0000 UTC m=+0.133680682 container init 6df18d331f2ccf5e19017d42e753cb3908d6fceb137ed1521c71025bcc6b9255 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=nova_compute)
Dec  3 16:27:14 np0005544708 podman[241550]: 2025-12-03 21:27:14.161986185 +0000 UTC m=+0.146632410 container start 6df18d331f2ccf5e19017d42e753cb3908d6fceb137ed1521c71025bcc6b9255 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, container_name=nova_compute)
Dec  3 16:27:14 np0005544708 podman[241550]: nova_compute
Dec  3 16:27:14 np0005544708 nova_compute[241566]: + sudo -E kolla_set_configs
Dec  3 16:27:14 np0005544708 systemd[1]: Started nova_compute container.
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Validating config file
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Copying service configuration files
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Deleting /etc/ceph
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Creating directory /etc/ceph
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Setting permission for /etc/ceph
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Writing out command to execute
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  3 16:27:14 np0005544708 nova_compute[241566]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  3 16:27:14 np0005544708 nova_compute[241566]: ++ cat /run_command
Dec  3 16:27:14 np0005544708 nova_compute[241566]: + CMD=nova-compute
Dec  3 16:27:14 np0005544708 nova_compute[241566]: + ARGS=
Dec  3 16:27:14 np0005544708 nova_compute[241566]: + sudo kolla_copy_cacerts
Dec  3 16:27:14 np0005544708 nova_compute[241566]: + [[ ! -n '' ]]
Dec  3 16:27:14 np0005544708 nova_compute[241566]: + . kolla_extend_start
Dec  3 16:27:14 np0005544708 nova_compute[241566]: Running command: 'nova-compute'
Dec  3 16:27:14 np0005544708 nova_compute[241566]: + echo 'Running command: '\''nova-compute'\'''
Dec  3 16:27:14 np0005544708 nova_compute[241566]: + umask 0022
Dec  3 16:27:14 np0005544708 nova_compute[241566]: + exec nova-compute
Dec  3 16:27:14 np0005544708 podman[241572]: 2025-12-03 21:27:14.359484485 +0000 UTC m=+0.146419283 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  3 16:27:15 np0005544708 python3.9[241754]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  3 16:27:15 np0005544708 systemd[1]: Started libpod-conmon-5f2a8f92af126201e4bcea7b6c8994833207632e395a767ac859e71af7660679.scope.
Dec  3 16:27:15 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:27:15 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b456d77c18c21606f98af99f9de0fc93246ff32666866c092acc1dee3c5deb7/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec  3 16:27:15 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b456d77c18c21606f98af99f9de0fc93246ff32666866c092acc1dee3c5deb7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  3 16:27:15 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b456d77c18c21606f98af99f9de0fc93246ff32666866c092acc1dee3c5deb7/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec  3 16:27:15 np0005544708 podman[241781]: 2025-12-03 21:27:15.719112279 +0000 UTC m=+0.161616840 container init 5f2a8f92af126201e4bcea7b6c8994833207632e395a767ac859e71af7660679 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec  3 16:27:15 np0005544708 podman[241781]: 2025-12-03 21:27:15.729888648 +0000 UTC m=+0.172393179 container start 5f2a8f92af126201e4bcea7b6c8994833207632e395a767ac859e71af7660679 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:27:15 np0005544708 python3.9[241754]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec  3 16:27:15 np0005544708 nova_compute_init[241802]: INFO:nova_statedir:Applying nova statedir ownership
Dec  3 16:27:15 np0005544708 nova_compute_init[241802]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec  3 16:27:15 np0005544708 nova_compute_init[241802]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec  3 16:27:15 np0005544708 nova_compute_init[241802]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec  3 16:27:15 np0005544708 nova_compute_init[241802]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec  3 16:27:15 np0005544708 nova_compute_init[241802]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec  3 16:27:15 np0005544708 nova_compute_init[241802]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec  3 16:27:15 np0005544708 nova_compute_init[241802]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec  3 16:27:15 np0005544708 nova_compute_init[241802]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec  3 16:27:15 np0005544708 nova_compute_init[241802]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec  3 16:27:15 np0005544708 nova_compute_init[241802]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec  3 16:27:15 np0005544708 nova_compute_init[241802]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec  3 16:27:15 np0005544708 nova_compute_init[241802]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec  3 16:27:15 np0005544708 nova_compute_init[241802]: INFO:nova_statedir:Nova statedir ownership complete
Dec  3 16:27:15 np0005544708 systemd[1]: libpod-5f2a8f92af126201e4bcea7b6c8994833207632e395a767ac859e71af7660679.scope: Deactivated successfully.
Dec  3 16:27:15 np0005544708 podman[241803]: 2025-12-03 21:27:15.811446843 +0000 UTC m=+0.045539061 container died 5f2a8f92af126201e4bcea7b6c8994833207632e395a767ac859e71af7660679 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 16:27:15 np0005544708 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f2a8f92af126201e4bcea7b6c8994833207632e395a767ac859e71af7660679-userdata-shm.mount: Deactivated successfully.
Dec  3 16:27:15 np0005544708 systemd[1]: var-lib-containers-storage-overlay-5b456d77c18c21606f98af99f9de0fc93246ff32666866c092acc1dee3c5deb7-merged.mount: Deactivated successfully.
Dec  3 16:27:15 np0005544708 podman[241814]: 2025-12-03 21:27:15.868533173 +0000 UTC m=+0.062550857 container cleanup 5f2a8f92af126201e4bcea7b6c8994833207632e395a767ac859e71af7660679 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 16:27:15 np0005544708 systemd[1]: libpod-conmon-5f2a8f92af126201e4bcea7b6c8994833207632e395a767ac859e71af7660679.scope: Deactivated successfully.
Dec  3 16:27:16 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v620: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:27:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:27:16 np0005544708 systemd[1]: session-49.scope: Deactivated successfully.
Dec  3 16:27:16 np0005544708 systemd[1]: session-49.scope: Consumed 2min 41.092s CPU time.
Dec  3 16:27:16 np0005544708 systemd-logind[787]: Session 49 logged out. Waiting for processes to exit.
Dec  3 16:27:16 np0005544708 nova_compute[241566]: 2025-12-03 21:27:16.390 241570 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  3 16:27:16 np0005544708 nova_compute[241566]: 2025-12-03 21:27:16.390 241570 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  3 16:27:16 np0005544708 nova_compute[241566]: 2025-12-03 21:27:16.391 241570 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  3 16:27:16 np0005544708 nova_compute[241566]: 2025-12-03 21:27:16.391 241570 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  3 16:27:16 np0005544708 systemd-logind[787]: Removed session 49.
Dec  3 16:27:16 np0005544708 nova_compute[241566]: 2025-12-03 21:27:16.528 241570 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:27:16 np0005544708 nova_compute[241566]: 2025-12-03 21:27:16.540 241570 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:27:16 np0005544708 nova_compute[241566]: 2025-12-03 21:27:16.541 241570 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.006 241570 INFO nova.virt.driver [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.132 241570 INFO nova.compute.provider_config [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.151 241570 DEBUG oslo_concurrency.lockutils [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.151 241570 DEBUG oslo_concurrency.lockutils [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.152 241570 DEBUG oslo_concurrency.lockutils [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.152 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.152 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.152 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.153 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.153 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.153 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.153 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.153 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.154 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.154 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.154 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.154 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.154 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.155 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.155 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.155 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.155 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.155 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.156 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.156 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.156 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.156 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.156 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.157 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.157 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.157 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.157 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.157 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.158 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.158 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.158 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.158 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.158 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.159 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.159 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.159 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.159 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.159 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.160 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.160 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.160 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.160 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.160 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.161 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.161 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.161 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.161 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.161 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.162 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.162 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.162 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.162 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.162 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.163 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.163 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.163 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.163 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.163 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.163 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.164 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.164 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.164 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.164 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.164 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.165 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.165 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.165 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.165 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.165 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.165 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.166 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.166 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.166 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.166 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.166 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.166 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.167 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.167 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.167 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.167 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.167 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.168 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.168 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.168 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.168 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.168 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.169 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.169 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.169 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.169 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.169 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.169 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.170 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.170 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.170 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.170 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.170 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.171 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.171 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.171 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.171 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.171 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.171 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.172 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.172 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.172 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.172 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.172 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.173 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.173 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.173 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.173 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.173 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.173 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.174 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.174 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.174 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.174 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.174 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.175 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.175 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.175 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.175 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.175 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.175 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.176 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.176 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.176 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.176 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.176 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.177 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.177 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.177 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.177 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.177 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.177 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.177 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.177 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.178 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.178 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.178 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.178 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.178 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.178 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.178 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.179 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.179 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.179 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.179 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.179 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.179 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.179 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.179 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.180 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.180 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.180 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.180 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.180 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.180 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.181 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.181 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.181 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.181 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.181 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.181 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.181 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.181 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.182 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.182 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.182 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.182 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.182 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.182 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.182 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.183 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.183 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.183 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.183 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.183 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.183 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.183 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.183 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.184 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.184 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.184 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.184 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.184 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.184 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.184 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.185 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.185 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.185 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.185 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.185 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.185 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.185 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.186 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.186 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.186 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.186 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.186 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.186 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.186 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.186 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.187 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.187 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.187 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.187 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.187 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.187 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.187 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.188 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.188 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.188 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.188 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.188 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.188 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.188 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.188 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.189 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.189 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.189 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.189 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.189 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.189 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.189 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.190 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.190 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.190 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.190 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.190 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.190 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.190 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.190 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.191 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.191 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.191 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.191 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.191 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.191 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.191 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.192 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.192 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.192 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.192 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.192 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.192 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.192 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.192 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.193 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.193 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.193 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.193 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.193 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.193 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.193 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.194 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.194 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.194 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.194 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.194 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.194 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.194 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.194 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.195 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.195 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.195 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.195 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.195 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.195 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.195 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.196 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.196 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.196 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.196 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.196 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.196 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.196 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.197 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.197 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.197 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.197 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.197 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.197 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.197 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.197 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.198 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.198 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.198 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.198 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.198 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.198 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.198 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.199 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.199 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.199 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.199 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.199 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.199 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.199 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.199 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.200 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.200 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.200 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.200 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.200 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.200 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.200 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.201 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.201 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.201 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.201 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.201 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.201 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.201 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.201 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.202 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.202 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.202 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.202 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.202 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.202 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.202 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.203 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.203 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.203 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.203 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.203 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.203 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.203 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.204 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.204 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.204 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.204 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.204 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.204 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.204 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.204 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.205 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.205 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.205 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.205 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.205 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.205 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.205 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.206 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.206 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.206 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.206 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.206 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.206 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.206 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.206 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.207 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.207 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.207 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.207 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.207 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.207 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.208 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.208 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.208 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.208 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.208 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.208 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.208 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.209 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.209 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.209 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.209 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.209 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.209 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.209 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.209 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.210 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.210 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.210 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.210 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.210 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.210 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.210 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.211 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.211 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.211 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.211 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.211 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.211 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.212 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.212 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.212 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.212 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.212 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.212 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.212 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.213 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.213 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.213 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.213 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.213 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.213 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.213 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.213 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.214 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.214 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.214 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.214 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.214 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.214 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.214 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.215 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.215 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.215 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.215 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.215 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.215 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.215 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.215 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.216 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.216 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.216 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.216 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.216 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.216 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.216 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.217 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.217 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.217 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.217 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.217 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.217 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.217 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.217 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.218 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.218 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.218 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.218 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.218 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.218 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.218 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.219 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.219 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.219 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.219 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.219 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.219 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.219 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.220 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.220 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.220 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.220 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.220 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.220 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.220 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.221 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.221 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.221 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.221 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.221 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.221 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.221 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.221 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.222 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.222 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.222 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.222 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.222 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.222 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.223 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.223 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.223 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.223 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.223 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.223 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.223 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.224 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.224 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.224 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.224 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.224 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.224 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.224 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.225 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.225 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.225 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.225 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.226 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.226 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.226 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.226 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.226 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.226 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.227 241570 WARNING oslo_config.cfg [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  3 16:27:17 np0005544708 nova_compute[241566]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  3 16:27:17 np0005544708 nova_compute[241566]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  3 16:27:17 np0005544708 nova_compute[241566]: and ``live_migration_inbound_addr`` respectively.
Dec  3 16:27:17 np0005544708 nova_compute[241566]: ).  Its value may be silently ignored in the future.#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.227 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.227 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.227 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.227 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.227 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.228 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.228 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.228 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.228 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.228 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.228 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.228 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.228 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.229 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.229 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.229 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.229 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.229 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.229 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.rbd_secret_uuid        = c21de27e-a7fd-594b-8324-0697ba9aab3a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.229 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.230 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.230 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.230 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.230 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.230 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.230 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.230 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.231 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.231 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.231 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.231 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.231 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.231 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.231 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.232 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.232 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.232 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.232 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.232 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.232 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.232 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.233 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.233 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.233 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.233 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.233 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.233 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.233 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.233 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.234 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.234 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.234 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.234 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.234 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.234 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.234 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.235 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.235 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.235 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.235 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.235 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.235 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.235 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.235 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.236 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.236 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.236 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.236 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.236 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.236 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.236 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.237 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.237 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.237 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.237 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.237 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.237 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.237 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.238 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.238 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.238 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.238 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.238 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.238 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.238 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.238 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.239 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.239 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.239 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.239 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.239 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.239 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.239 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.240 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.240 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.240 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.240 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.240 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.240 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.240 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.240 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.241 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.241 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.241 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.241 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.241 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.241 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.241 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.242 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.242 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.242 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.242 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.242 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.242 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.242 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.242 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.243 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.243 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.243 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.243 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.243 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.243 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.243 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.243 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.244 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.244 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.244 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.244 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.244 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.244 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.244 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.245 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.245 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.245 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.245 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.245 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.245 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.245 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.245 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.246 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.246 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.246 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.246 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.246 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.246 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.247 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.247 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.247 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.247 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.247 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.247 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.247 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.248 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.248 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.248 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.248 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.248 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.248 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.248 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.249 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.249 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.249 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.249 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.249 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.249 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.249 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.249 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.250 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.250 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.250 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.250 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.250 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.250 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.250 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.251 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.251 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.251 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.251 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.251 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.251 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.251 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.252 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.252 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.252 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.252 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.252 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.252 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.252 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.253 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.253 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.253 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.253 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.253 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.253 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.253 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.253 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.254 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.254 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.254 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.254 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.254 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.254 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.254 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.255 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.255 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.255 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.255 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.255 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.255 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.256 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.256 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.256 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.256 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.256 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.256 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.256 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.257 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.257 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.257 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.257 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.257 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.257 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.257 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.258 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.258 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.258 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.258 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.258 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.258 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.258 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.259 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.259 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.259 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.259 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.259 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.259 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.259 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.259 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.260 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.260 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.260 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.260 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.260 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.260 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.261 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.261 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.261 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.261 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.261 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.261 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.262 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.262 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.262 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.262 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.262 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.262 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.262 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.263 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.263 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.263 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.263 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.263 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.263 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.263 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.264 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.264 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.264 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.264 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.264 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.264 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.264 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.265 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.265 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.265 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.265 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.265 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.265 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.265 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.266 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.266 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.266 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.266 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.266 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.266 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.266 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.267 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.267 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.267 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.267 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.267 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.267 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.268 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.268 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.268 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.268 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.268 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.268 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.268 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.269 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.269 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.269 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.269 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.269 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.269 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.269 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.270 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.270 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.270 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.270 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.270 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.270 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.271 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.271 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.271 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.271 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.271 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.271 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.272 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.272 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.272 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.272 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.272 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.272 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.273 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.273 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.273 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.273 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.273 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.273 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.273 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.274 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.274 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.274 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.274 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.274 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.274 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.275 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.275 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.275 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.275 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.275 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.275 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.275 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.276 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.276 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.276 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.276 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.276 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.276 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.276 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.277 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.277 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.277 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.277 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.277 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.277 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.277 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.277 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.278 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.278 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.278 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.278 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.278 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.278 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.278 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.279 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.279 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.279 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.279 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.279 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.279 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.279 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.280 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.280 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.280 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.280 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.280 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.280 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.280 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.281 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.281 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.281 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.281 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.281 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.281 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.281 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.282 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.282 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.282 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.282 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.282 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.282 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.282 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.283 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.283 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.283 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.283 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.283 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.283 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.283 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.284 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.284 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.284 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.284 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.284 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.284 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.285 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.285 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.285 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.285 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.285 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.285 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.285 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.285 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.286 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.286 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.286 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.286 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.286 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.286 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.286 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.287 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.287 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.287 241570 DEBUG oslo_service.service [None req-4eb91c96-5f1b-46df-abf8-c2655c7f26a2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.288 241570 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.306 241570 INFO nova.virt.node [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Determined node identity 94aba67c-5c5e-45d0-83d1-33eb467c8775 from /var/lib/nova/compute_id#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.307 241570 DEBUG nova.virt.libvirt.host [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.307 241570 DEBUG nova.virt.libvirt.host [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.308 241570 DEBUG nova.virt.libvirt.host [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.308 241570 DEBUG nova.virt.libvirt.host [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.318 241570 DEBUG nova.virt.libvirt.host [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f3da91c41c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.320 241570 DEBUG nova.virt.libvirt.host [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f3da91c41c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.320 241570 INFO nova.virt.libvirt.driver [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.325 241570 INFO nova.virt.libvirt.host [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Libvirt host capabilities <capabilities>
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <host>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <uuid>fe808748-0a27-4a3c-9875-a9777da5fa17</uuid>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <cpu>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <arch>x86_64</arch>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model>EPYC-Rome-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <vendor>AMD</vendor>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <microcode version='16777317'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <signature family='23' model='49' stepping='0'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='x2apic'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='tsc-deadline'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='osxsave'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='hypervisor'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='tsc_adjust'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='spec-ctrl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='stibp'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='arch-capabilities'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='ssbd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='cmp_legacy'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='topoext'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='virt-ssbd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='lbrv'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='tsc-scale'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='vmcb-clean'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='pause-filter'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='pfthreshold'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='svme-addr-chk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='rdctl-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='skip-l1dfl-vmentry'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='mds-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature name='pschange-mc-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <pages unit='KiB' size='4'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <pages unit='KiB' size='2048'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <pages unit='KiB' size='1048576'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </cpu>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <power_management>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <suspend_mem/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </power_management>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <iommu support='no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <migration_features>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <live/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <uri_transports>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <uri_transport>tcp</uri_transport>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <uri_transport>rdma</uri_transport>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </uri_transports>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </migration_features>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <topology>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <cells num='1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <cell id='0'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:          <memory unit='KiB'>7864316</memory>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:          <pages unit='KiB' size='4'>1966079</pages>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:          <pages unit='KiB' size='2048'>0</pages>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:          <distances>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:            <sibling id='0' value='10'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:          </distances>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:          <cpus num='8'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:          </cpus>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        </cell>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </cells>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </topology>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <cache>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </cache>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <secmodel>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model>selinux</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <doi>0</doi>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </secmodel>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <secmodel>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model>dac</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <doi>0</doi>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </secmodel>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </host>
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <guest>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <os_type>hvm</os_type>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <arch name='i686'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <wordsize>32</wordsize>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <domain type='qemu'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <domain type='kvm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </arch>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <features>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <pae/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <nonpae/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <acpi default='on' toggle='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <apic default='on' toggle='no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <cpuselection/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <deviceboot/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <disksnapshot default='on' toggle='no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <externalSnapshot/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </features>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </guest>
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <guest>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <os_type>hvm</os_type>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <arch name='x86_64'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <wordsize>64</wordsize>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <domain type='qemu'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <domain type='kvm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </arch>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <features>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <acpi default='on' toggle='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <apic default='on' toggle='no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <cpuselection/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <deviceboot/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <disksnapshot default='on' toggle='no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <externalSnapshot/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </features>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </guest>
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 
Dec  3 16:27:17 np0005544708 nova_compute[241566]: </capabilities>
Dec  3 16:27:17 np0005544708 nova_compute[241566]: #033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.336 241570 DEBUG nova.virt.libvirt.volume.mount [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.337 241570 DEBUG nova.virt.libvirt.host [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.340 241570 DEBUG nova.virt.libvirt.host [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  3 16:27:17 np0005544708 nova_compute[241566]: <domainCapabilities>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <domain>kvm</domain>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <arch>i686</arch>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <vcpu max='4096'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <iothreads supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <os supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <enum name='firmware'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <loader supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='type'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>rom</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>pflash</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='readonly'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>yes</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>no</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='secure'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>no</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </loader>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </os>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <cpu>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <mode name='host-passthrough' supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='hostPassthroughMigratable'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>on</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>off</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </mode>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <mode name='maximum' supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='maximumMigratable'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>on</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>off</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </mode>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <mode name='host-model' supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <vendor>AMD</vendor>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='x2apic'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='hypervisor'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='stibp'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='ssbd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='overflow-recov'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='succor'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='ibrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='lbrv'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='tsc-scale'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='flushbyasid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='pause-filter'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='pfthreshold'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='disable' name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </mode>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <mode name='custom' supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-noTSX'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cooperlake'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cooperlake-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cooperlake-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Denverton'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mpx'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Denverton-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mpx'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Denverton-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Denverton-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Dhyana-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Genoa'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amd-psfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='auto-ibrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='stibp-always-on'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amd-psfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='auto-ibrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='stibp-always-on'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Milan'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Milan-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Milan-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amd-psfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='stibp-always-on'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Rome'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Rome-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Rome-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Rome-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='GraniteRapids'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mcdt-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pbrsb-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='prefetchiti'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='GraniteRapids-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mcdt-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pbrsb-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='prefetchiti'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='GraniteRapids-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx10'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx10-128'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx10-256'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx10-512'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mcdt-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pbrsb-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='prefetchiti'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-noTSX'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v5'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v6'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v7'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='IvyBridge'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='IvyBridge-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='IvyBridge-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='IvyBridge-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='KnightsMill'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-4fmaps'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-4vnniw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512er'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512pf'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='KnightsMill-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-4fmaps'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-4vnniw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512er'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512pf'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Opteron_G4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fma4'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xop'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Opteron_G4-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fma4'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xop'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Opteron_G5'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fma4'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tbm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xop'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Opteron_G5-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fma4'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tbm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xop'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='SapphireRapids'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='SapphireRapids-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='SapphireRapids-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='SapphireRapids-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='SierraForest'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-ne-convert'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cmpccxadd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mcdt-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pbrsb-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='SierraForest-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-ne-convert'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cmpccxadd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mcdt-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pbrsb-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-v5'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Snowridge'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='core-capability'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mpx'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='split-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Snowridge-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='core-capability'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mpx'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='split-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Snowridge-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='core-capability'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='split-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Snowridge-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='core-capability'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='split-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Snowridge-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='athlon'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnow'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnowext'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='athlon-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnow'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnowext'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='core2duo'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='core2duo-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='coreduo'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='coreduo-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='n270'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='n270-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='phenom'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnow'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnowext'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='phenom-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnow'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnowext'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </mode>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </cpu>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <memoryBacking supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <enum name='sourceType'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <value>file</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <value>anonymous</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <value>memfd</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </memoryBacking>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <devices>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <disk supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='diskDevice'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>disk</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>cdrom</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>floppy</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>lun</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='bus'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>fdc</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>scsi</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>usb</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>sata</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='model'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio-transitional</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio-non-transitional</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </disk>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <graphics supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='type'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>vnc</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>egl-headless</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>dbus</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </graphics>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <video supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='modelType'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>vga</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>cirrus</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>none</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>bochs</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>ramfb</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </video>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <hostdev supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='mode'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>subsystem</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='startupPolicy'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>default</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>mandatory</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>requisite</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>optional</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='subsysType'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>usb</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>pci</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>scsi</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='capsType'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='pciBackend'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </hostdev>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <rng supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='model'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio-transitional</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio-non-transitional</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='backendModel'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>random</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>egd</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>builtin</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </rng>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <filesystem supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='driverType'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>path</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>handle</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtiofs</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </filesystem>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <tpm supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='model'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>tpm-tis</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>tpm-crb</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='backendModel'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>emulator</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>external</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='backendVersion'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>2.0</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </tpm>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <redirdev supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='bus'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>usb</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </redirdev>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <channel supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='type'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>pty</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>unix</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </channel>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <crypto supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='model'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='type'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>qemu</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='backendModel'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>builtin</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </crypto>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <interface supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='backendType'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>default</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>passt</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </interface>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <panic supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='model'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>isa</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>hyperv</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </panic>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <console supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='type'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>null</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>vc</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>pty</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>dev</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>file</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>pipe</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>stdio</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>udp</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>tcp</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>unix</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>qemu-vdagent</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>dbus</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </console>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </devices>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <features>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <gic supported='no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <vmcoreinfo supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <genid supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <backingStoreInput supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <backup supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <async-teardown supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <ps2 supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <sev supported='no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <sgx supported='no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <hyperv supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='features'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>relaxed</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>vapic</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>spinlocks</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>vpindex</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>runtime</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>synic</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>stimer</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>reset</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>vendor_id</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>frequencies</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>reenlightenment</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>tlbflush</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>ipi</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>avic</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>emsr_bitmap</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>xmm_input</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <defaults>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <spinlocks>4095</spinlocks>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <stimer_direct>on</stimer_direct>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </defaults>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </hyperv>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <launchSecurity supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='sectype'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>tdx</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </launchSecurity>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </features>
Dec  3 16:27:17 np0005544708 nova_compute[241566]: </domainCapabilities>
Dec  3 16:27:17 np0005544708 nova_compute[241566]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.345 241570 DEBUG nova.virt.libvirt.host [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  3 16:27:17 np0005544708 nova_compute[241566]: <domainCapabilities>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <domain>kvm</domain>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <arch>i686</arch>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <vcpu max='240'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <iothreads supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <os supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <enum name='firmware'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <loader supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='type'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>rom</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>pflash</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='readonly'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>yes</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>no</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='secure'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>no</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </loader>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </os>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <cpu>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <mode name='host-passthrough' supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='hostPassthroughMigratable'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>on</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>off</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </mode>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <mode name='maximum' supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='maximumMigratable'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>on</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>off</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </mode>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <mode name='host-model' supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <vendor>AMD</vendor>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='x2apic'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='hypervisor'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='stibp'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='ssbd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='overflow-recov'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='succor'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='ibrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='lbrv'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='tsc-scale'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='flushbyasid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='pause-filter'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='pfthreshold'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='disable' name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </mode>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <mode name='custom' supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-noTSX'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cooperlake'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cooperlake-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cooperlake-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Denverton'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mpx'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Denverton-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mpx'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Denverton-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Denverton-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Dhyana-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Genoa'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amd-psfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='auto-ibrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='stibp-always-on'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amd-psfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='auto-ibrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='stibp-always-on'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Milan'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Milan-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Milan-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amd-psfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='stibp-always-on'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Rome'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Rome-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Rome-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Rome-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='GraniteRapids'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mcdt-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pbrsb-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='prefetchiti'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='GraniteRapids-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mcdt-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pbrsb-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='prefetchiti'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='GraniteRapids-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx10'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx10-128'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx10-256'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx10-512'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mcdt-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pbrsb-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='prefetchiti'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-noTSX'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v5'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v6'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v7'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='IvyBridge'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='IvyBridge-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='IvyBridge-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='IvyBridge-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='KnightsMill'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-4fmaps'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-4vnniw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512er'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512pf'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='KnightsMill-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-4fmaps'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-4vnniw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512er'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512pf'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Opteron_G4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fma4'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xop'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Opteron_G4-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fma4'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xop'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Opteron_G5'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fma4'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tbm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xop'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Opteron_G5-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fma4'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tbm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xop'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='SapphireRapids'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='SapphireRapids-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='SapphireRapids-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='SapphireRapids-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='SierraForest'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-ne-convert'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cmpccxadd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mcdt-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pbrsb-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='SierraForest-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-ne-convert'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cmpccxadd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mcdt-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pbrsb-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-v5'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Snowridge'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='core-capability'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mpx'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='split-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Snowridge-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='core-capability'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mpx'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='split-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Snowridge-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='core-capability'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='split-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Snowridge-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='core-capability'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='split-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Snowridge-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='athlon'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnow'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnowext'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='athlon-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnow'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnowext'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='core2duo'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='core2duo-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='coreduo'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='coreduo-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='n270'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='n270-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='phenom'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnow'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnowext'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='phenom-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnow'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnowext'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </mode>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </cpu>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <memoryBacking supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <enum name='sourceType'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <value>file</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <value>anonymous</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <value>memfd</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </memoryBacking>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <devices>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <disk supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='diskDevice'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>disk</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>cdrom</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>floppy</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>lun</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='bus'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>ide</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>fdc</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>scsi</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>usb</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>sata</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='model'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio-transitional</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio-non-transitional</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </disk>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <graphics supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='type'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>vnc</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>egl-headless</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>dbus</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </graphics>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <video supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='modelType'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>vga</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>cirrus</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>none</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>bochs</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>ramfb</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </video>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <hostdev supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='mode'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>subsystem</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='startupPolicy'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>default</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>mandatory</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>requisite</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>optional</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='subsysType'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>usb</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>pci</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>scsi</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='capsType'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='pciBackend'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </hostdev>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <rng supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='model'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio-transitional</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio-non-transitional</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='backendModel'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>random</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>egd</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>builtin</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </rng>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <filesystem supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='driverType'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>path</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>handle</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtiofs</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </filesystem>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <tpm supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='model'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>tpm-tis</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>tpm-crb</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='backendModel'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>emulator</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>external</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='backendVersion'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>2.0</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </tpm>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <redirdev supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='bus'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>usb</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </redirdev>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <channel supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='type'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>pty</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>unix</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </channel>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <crypto supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='model'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='type'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>qemu</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='backendModel'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>builtin</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </crypto>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <interface supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='backendType'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>default</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>passt</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </interface>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <panic supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='model'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>isa</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>hyperv</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </panic>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <console supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='type'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>null</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>vc</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>pty</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>dev</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>file</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>pipe</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>stdio</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>udp</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>tcp</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>unix</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>qemu-vdagent</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>dbus</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </console>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </devices>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <features>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <gic supported='no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <vmcoreinfo supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <genid supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <backingStoreInput supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <backup supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <async-teardown supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <ps2 supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <sev supported='no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <sgx supported='no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <hyperv supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='features'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>relaxed</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>vapic</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>spinlocks</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>vpindex</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>runtime</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>synic</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>stimer</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>reset</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>vendor_id</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>frequencies</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>reenlightenment</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>tlbflush</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>ipi</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>avic</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>emsr_bitmap</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>xmm_input</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <defaults>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <spinlocks>4095</spinlocks>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <stimer_direct>on</stimer_direct>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </defaults>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </hyperv>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <launchSecurity supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='sectype'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>tdx</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </launchSecurity>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </features>
Dec  3 16:27:17 np0005544708 nova_compute[241566]: </domainCapabilities>
Dec  3 16:27:17 np0005544708 nova_compute[241566]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.373 241570 DEBUG nova.virt.libvirt.host [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.378 241570 DEBUG nova.virt.libvirt.host [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  3 16:27:17 np0005544708 nova_compute[241566]: <domainCapabilities>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <domain>kvm</domain>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <arch>x86_64</arch>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <vcpu max='4096'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <iothreads supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <os supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <enum name='firmware'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <value>efi</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <loader supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='type'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>rom</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>pflash</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='readonly'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>yes</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>no</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='secure'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>yes</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>no</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </loader>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </os>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <cpu>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <mode name='host-passthrough' supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='hostPassthroughMigratable'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>on</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>off</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </mode>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <mode name='maximum' supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='maximumMigratable'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>on</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>off</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </mode>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <mode name='host-model' supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <vendor>AMD</vendor>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='x2apic'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='hypervisor'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='stibp'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='ssbd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='overflow-recov'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='succor'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='ibrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='lbrv'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='tsc-scale'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='flushbyasid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='pause-filter'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='pfthreshold'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='disable' name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </mode>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <mode name='custom' supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-noTSX'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cooperlake'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cooperlake-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cooperlake-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Denverton'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mpx'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Denverton-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mpx'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Denverton-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Denverton-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Dhyana-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Genoa'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amd-psfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='auto-ibrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='stibp-always-on'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Genoa-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amd-psfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='auto-ibrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='stibp-always-on'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Milan'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Milan-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Milan-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amd-psfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='no-nested-data-bp'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='null-sel-clr-base'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='stibp-always-on'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Rome'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Rome-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Rome-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-Rome-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='EPYC-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='GraniteRapids'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mcdt-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pbrsb-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='prefetchiti'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='GraniteRapids-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mcdt-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pbrsb-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='prefetchiti'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='GraniteRapids-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx10'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx10-128'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx10-256'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx10-512'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mcdt-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pbrsb-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='prefetchiti'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-noTSX'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Haswell-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-noTSX'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v5'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v6'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Icelake-Server-v7'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='IvyBridge'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='IvyBridge-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='IvyBridge-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='IvyBridge-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='KnightsMill'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-4fmaps'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-4vnniw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512er'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512pf'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='KnightsMill-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-4fmaps'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-4vnniw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512er'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512pf'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Opteron_G4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fma4'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xop'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Opteron_G4-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fma4'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xop'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Opteron_G5'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fma4'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tbm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xop'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Opteron_G5-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fma4'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tbm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xop'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='SapphireRapids'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='SapphireRapids-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='SapphireRapids-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='SapphireRapids-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='amx-tile'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-fp16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-vpopcntdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bitalg'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vbmi2'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrc'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fzrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='la57'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='tsx-ldtrk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xfd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='SierraForest'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-ne-convert'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cmpccxadd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mcdt-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pbrsb-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='SierraForest-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-ifma'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-ne-convert'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx-vnni-int8'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='bus-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cmpccxadd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fbsdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='fsrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mcdt-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pbrsb-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='psdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='sbdr-ssdp-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='serialize'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vaes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='vpclmulqdq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Client-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Skylake-Server-v5'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Snowridge'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='core-capability'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mpx'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='split-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Snowridge-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='core-capability'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='mpx'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='split-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Snowridge-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='core-capability'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='split-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Snowridge-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='core-capability'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='split-lock-detect'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Snowridge-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='cldemote'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='gfni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdir64b'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='movdiri'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='athlon'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnow'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnowext'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='athlon-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnow'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnowext'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='core2duo'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='core2duo-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='coreduo'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='coreduo-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='n270'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='n270-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ss'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='phenom'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnow'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnowext'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='phenom-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnow'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='3dnowext'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </mode>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </cpu>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <memoryBacking supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <enum name='sourceType'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <value>file</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <value>anonymous</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <value>memfd</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </memoryBacking>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <devices>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <disk supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='diskDevice'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>disk</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>cdrom</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>floppy</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>lun</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='bus'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>fdc</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>scsi</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>usb</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>sata</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='model'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio-transitional</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio-non-transitional</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </disk>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <graphics supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='type'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>vnc</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>egl-headless</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>dbus</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </graphics>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <video supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='modelType'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>vga</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>cirrus</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>none</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>bochs</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>ramfb</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </video>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <hostdev supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='mode'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>subsystem</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='startupPolicy'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>default</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>mandatory</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>requisite</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>optional</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='subsysType'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>usb</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>pci</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>scsi</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='capsType'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='pciBackend'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </hostdev>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <rng supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='model'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio-transitional</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtio-non-transitional</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='backendModel'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>random</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>egd</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>builtin</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </rng>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <filesystem supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='driverType'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>path</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>handle</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>virtiofs</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </filesystem>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <tpm supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='model'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>tpm-tis</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>tpm-crb</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='backendModel'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>emulator</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>external</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='backendVersion'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>2.0</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </tpm>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <redirdev supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='bus'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>usb</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </redirdev>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <channel supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='type'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>pty</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>unix</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </channel>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <crypto supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='model'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='type'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>qemu</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='backendModel'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>builtin</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </crypto>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <interface supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='backendType'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>default</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>passt</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </interface>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <panic supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='model'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>isa</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>hyperv</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </panic>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <console supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='type'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>null</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>vc</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>pty</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>dev</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>file</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>pipe</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>stdio</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>udp</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>tcp</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>unix</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>qemu-vdagent</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>dbus</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </console>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </devices>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <features>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <gic supported='no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <vmcoreinfo supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <genid supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <backingStoreInput supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <backup supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <async-teardown supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <ps2 supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <sev supported='no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <sgx supported='no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <hyperv supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='features'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>relaxed</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>vapic</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>spinlocks</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>vpindex</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>runtime</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>synic</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>stimer</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>reset</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>vendor_id</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>frequencies</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>reenlightenment</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>tlbflush</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>ipi</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>avic</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>emsr_bitmap</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>xmm_input</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <defaults>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <spinlocks>4095</spinlocks>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <stimer_direct>on</stimer_direct>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <tlbflush_direct>on</tlbflush_direct>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <tlbflush_extended>on</tlbflush_extended>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </defaults>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </hyperv>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <launchSecurity supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='sectype'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>tdx</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </launchSecurity>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </features>
Dec  3 16:27:17 np0005544708 nova_compute[241566]: </domainCapabilities>
Dec  3 16:27:17 np0005544708 nova_compute[241566]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  3 16:27:17 np0005544708 nova_compute[241566]: 2025-12-03 21:27:17.442 241570 DEBUG nova.virt.libvirt.host [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  3 16:27:17 np0005544708 nova_compute[241566]: <domainCapabilities>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <path>/usr/libexec/qemu-kvm</path>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <domain>kvm</domain>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <arch>x86_64</arch>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <vcpu max='240'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <iothreads supported='yes'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <os supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <enum name='firmware'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <loader supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='type'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>rom</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>pflash</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='readonly'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>yes</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>no</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='secure'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>no</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </loader>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  </os>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:  <cpu>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <mode name='host-passthrough' supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='hostPassthroughMigratable'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>on</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>off</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </mode>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <mode name='maximum' supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <enum name='maximumMigratable'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>on</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <value>off</value>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </enum>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </mode>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <mode name='host-model' supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <vendor>AMD</vendor>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='x2apic'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='tsc-deadline'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='hypervisor'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='tsc_adjust'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='spec-ctrl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='stibp'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='ssbd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='cmp_legacy'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='overflow-recov'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='succor'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='ibrs'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='amd-ssbd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='virt-ssbd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='lbrv'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='tsc-scale'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='vmcb-clean'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='flushbyasid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='pause-filter'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='pfthreshold'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='svme-addr-chk'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <feature policy='disable' name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    </mode>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:    <mode name='custom' supported='yes'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-noTSX'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Broadwell-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v2'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v3'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v4'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cascadelake-Server-v5'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='xsaves'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cooperlake'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='ibrs-all'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='invpcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pcid'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='pku'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='rtm'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='taa-no'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      </blockers>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:      <blockers model='Cooperlake-v1'>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512-bf16'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512bw'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512cd'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512dq'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512f'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vl'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='avx512vnni'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='erms'/>
Dec  3 16:27:17 np0005544708 nova_compute[241566]:        <feature name='hle'/>
Dec  3 16:27:53 np0005544708 podman[242503]: 2025-12-03 21:27:53.863750895 +0000 UTC m=+0.082020769 container create c98d8b29547b82ad31345e2f04d599c48c0e2d46cc55814a8273e9c06cad9d31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:27:53 np0005544708 systemd[1]: Started libpod-conmon-c98d8b29547b82ad31345e2f04d599c48c0e2d46cc55814a8273e9c06cad9d31.scope.
Dec  3 16:27:53 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:27:53 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff1dfaa65ef2601c3b820557f9c1f06a80c6a503e0711de9a44b1c23bb5e571/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:27:53 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff1dfaa65ef2601c3b820557f9c1f06a80c6a503e0711de9a44b1c23bb5e571/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:27:53 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff1dfaa65ef2601c3b820557f9c1f06a80c6a503e0711de9a44b1c23bb5e571/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:27:53 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eff1dfaa65ef2601c3b820557f9c1f06a80c6a503e0711de9a44b1c23bb5e571/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:27:53 np0005544708 podman[242503]: 2025-12-03 21:27:53.843539094 +0000 UTC m=+0.061808978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:27:53 np0005544708 podman[242503]: 2025-12-03 21:27:53.958983316 +0000 UTC m=+0.177253260 container init c98d8b29547b82ad31345e2f04d599c48c0e2d46cc55814a8273e9c06cad9d31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_murdock, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  3 16:27:53 np0005544708 podman[242503]: 2025-12-03 21:27:53.972719264 +0000 UTC m=+0.190989108 container start c98d8b29547b82ad31345e2f04d599c48c0e2d46cc55814a8273e9c06cad9d31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_murdock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:27:53 np0005544708 podman[242503]: 2025-12-03 21:27:53.976125075 +0000 UTC m=+0.194395009 container attach c98d8b29547b82ad31345e2f04d599c48c0e2d46cc55814a8273e9c06cad9d31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_murdock, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:27:54 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v639: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:27:54 np0005544708 rsyslogd[1006]: imjournal: 1738 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  3 16:27:54 np0005544708 lvm[242601]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:27:54 np0005544708 lvm[242599]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:27:54 np0005544708 lvm[242601]: VG ceph_vg2 finished
Dec  3 16:27:54 np0005544708 lvm[242599]: VG ceph_vg0 finished
Dec  3 16:27:54 np0005544708 lvm[242600]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:27:54 np0005544708 lvm[242600]: VG ceph_vg1 finished
Dec  3 16:27:54 np0005544708 cranky_murdock[242520]: {}
Dec  3 16:27:54 np0005544708 podman[242503]: 2025-12-03 21:27:54.865914402 +0000 UTC m=+1.084184256 container died c98d8b29547b82ad31345e2f04d599c48c0e2d46cc55814a8273e9c06cad9d31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  3 16:27:54 np0005544708 systemd[1]: libpod-c98d8b29547b82ad31345e2f04d599c48c0e2d46cc55814a8273e9c06cad9d31.scope: Deactivated successfully.
Dec  3 16:27:54 np0005544708 systemd[1]: libpod-c98d8b29547b82ad31345e2f04d599c48c0e2d46cc55814a8273e9c06cad9d31.scope: Consumed 1.519s CPU time.
Dec  3 16:27:54 np0005544708 systemd[1]: var-lib-containers-storage-overlay-eff1dfaa65ef2601c3b820557f9c1f06a80c6a503e0711de9a44b1c23bb5e571-merged.mount: Deactivated successfully.
Dec  3 16:27:54 np0005544708 podman[242503]: 2025-12-03 21:27:54.922208061 +0000 UTC m=+1.140477955 container remove c98d8b29547b82ad31345e2f04d599c48c0e2d46cc55814a8273e9c06cad9d31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_murdock, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:27:54 np0005544708 systemd[1]: libpod-conmon-c98d8b29547b82ad31345e2f04d599c48c0e2d46cc55814a8273e9c06cad9d31.scope: Deactivated successfully.
Dec  3 16:27:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:27:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:27:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:27:54 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:27:55 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:27:55 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:27:56 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v640: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:27:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:27:58 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v641: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:27:59 np0005544708 podman[242643]: 2025-12-03 21:27:59.155044661 +0000 UTC m=+0.076948374 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:27:59 np0005544708 podman[242642]: 2025-12-03 21:27:59.155296577 +0000 UTC m=+0.085235035 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:28:00 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v642: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:28:01 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:28:01 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3095 writes, 13K keys, 3095 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.01 MB/s#012Cumulative WAL: 3095 writes, 3095 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1284 writes, 5579 keys, 1284 commit groups, 1.0 writes per commit group, ingest: 5.74 MB, 0.01 MB/s#012Interval WAL: 1284 writes, 1284 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    101.0      0.10              0.04         6    0.017       0      0       0.0       0.0#012  L6      1/0    4.66 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4    146.5    119.4      0.21              0.09         5    0.042     16K   2270       0.0       0.0#012 Sum      1/0    4.66 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4     98.3    113.4      0.31              0.12        11    0.028     16K   2270       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5    120.5    122.8      0.16              0.06         6    0.026     10K   1497       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    146.5    119.4      0.21              0.09         5    0.042     16K   2270       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    106.2      0.10              0.04         5    0.019       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.010, interval 0.004#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.03 GB write, 0.03 MB/s write, 0.03 GB read, 0.03 MB/s read, 0.3 seconds#012Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56170d6e38d0#2 capacity: 308.00 MB usage: 1.32 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(90,1.17 MB,0.378814%) FilterBlock(12,54.30 KB,0.0172157%) IndexBlock(12,106.89 KB,0.0338914%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  3 16:28:02 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v643: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:04 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v644: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:06 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v645: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:28:08 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v646: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:10 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v647: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:28:12 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v648: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:14 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v649: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:16 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v650: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:28:16 np0005544708 podman[242682]: 2025-12-03 21:28:16.345968416 +0000 UTC m=+0.096019923 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 16:28:16 np0005544708 nova_compute[241566]: 2025-12-03 21:28:16.554 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:28:16 np0005544708 nova_compute[241566]: 2025-12-03 21:28:16.555 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:28:16 np0005544708 nova_compute[241566]: 2025-12-03 21:28:16.556 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 16:28:16 np0005544708 nova_compute[241566]: 2025-12-03 21:28:16.556 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 16:28:16 np0005544708 nova_compute[241566]: 2025-12-03 21:28:16.618 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 16:28:16 np0005544708 nova_compute[241566]: 2025-12-03 21:28:16.619 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:28:16 np0005544708 nova_compute[241566]: 2025-12-03 21:28:16.619 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:28:16 np0005544708 nova_compute[241566]: 2025-12-03 21:28:16.620 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:28:16 np0005544708 nova_compute[241566]: 2025-12-03 21:28:16.620 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:28:16 np0005544708 nova_compute[241566]: 2025-12-03 21:28:16.620 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:28:16 np0005544708 nova_compute[241566]: 2025-12-03 21:28:16.620 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:28:16 np0005544708 nova_compute[241566]: 2025-12-03 21:28:16.621 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 16:28:16 np0005544708 nova_compute[241566]: 2025-12-03 21:28:16.621 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:28:16 np0005544708 nova_compute[241566]: 2025-12-03 21:28:16.649 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:28:16 np0005544708 nova_compute[241566]: 2025-12-03 21:28:16.650 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:28:16 np0005544708 nova_compute[241566]: 2025-12-03 21:28:16.650 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:28:16 np0005544708 nova_compute[241566]: 2025-12-03 21:28:16.650 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 16:28:16 np0005544708 nova_compute[241566]: 2025-12-03 21:28:16.651 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:28:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:28:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1296139348' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:28:17 np0005544708 nova_compute[241566]: 2025-12-03 21:28:17.192 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:28:17 np0005544708 nova_compute[241566]: 2025-12-03 21:28:17.375 241570 WARNING nova.virt.libvirt.driver [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 16:28:17 np0005544708 nova_compute[241566]: 2025-12-03 21:28:17.376 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5292MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 16:28:17 np0005544708 nova_compute[241566]: 2025-12-03 21:28:17.376 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:28:17 np0005544708 nova_compute[241566]: 2025-12-03 21:28:17.376 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:28:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec  3 16:28:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4130373903' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec  3 16:28:17 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14308 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  3 16:28:17 np0005544708 ceph-mgr[75500]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  3 16:28:17 np0005544708 ceph-mgr[75500]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  3 16:28:18 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v651: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:19 np0005544708 nova_compute[241566]: 2025-12-03 21:28:19.627 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 16:28:19 np0005544708 nova_compute[241566]: 2025-12-03 21:28:19.627 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 16:28:19 np0005544708 nova_compute[241566]: 2025-12-03 21:28:19.654 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:28:20 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v652: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:20 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:28:20 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2724371676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:28:20 np0005544708 nova_compute[241566]: 2025-12-03 21:28:20.256 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:28:20 np0005544708 nova_compute[241566]: 2025-12-03 21:28:20.264 241570 DEBUG nova.compute.provider_tree [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed in ProviderTree for provider: 94aba67c-5c5e-45d0-83d1-33eb467c8775 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 16:28:20 np0005544708 nova_compute[241566]: 2025-12-03 21:28:20.279 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 16:28:20 np0005544708 nova_compute[241566]: 2025-12-03 21:28:20.281 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 16:28:20 np0005544708 nova_compute[241566]: 2025-12-03 21:28:20.281 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:28:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:28:21
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'images', 'backups', 'vms', 'cephfs.cephfs.meta', 'volumes']
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:28:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:28:22 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v653: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:24 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v654: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:26 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v655: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:28:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:28:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:28:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:28:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:28:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:28:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:28:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:28:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:28:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:28:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:28:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:28:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:28:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:28:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:28:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:28:28 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v656: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:30 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v657: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:30 np0005544708 podman[242753]: 2025-12-03 21:28:30.110727727 +0000 UTC m=+0.049182869 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec  3 16:28:30 np0005544708 podman[242752]: 2025-12-03 21:28:30.157642623 +0000 UTC m=+0.097636676 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  3 16:28:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:28:32 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v658: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:33 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec  3 16:28:33 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1939712706' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec  3 16:28:33 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14316 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  3 16:28:33 np0005544708 ceph-mgr[75500]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  3 16:28:33 np0005544708 ceph-mgr[75500]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  3 16:28:34 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v659: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:36 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v660: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:28:38 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v661: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:40 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v662: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:28:42 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v663: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:44 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v664: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:46 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v665: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:28:47 np0005544708 podman[242789]: 2025-12-03 21:28:47.192766164 +0000 UTC m=+0.125235135 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  3 16:28:48 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v666: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:28:48.931 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:28:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:28:48.932 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:28:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:28:48.932 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:28:50 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v667: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:28:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:28:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:28:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:28:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:28:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:28:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:28:52 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v668: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:54 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v669: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:28:55 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:28:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:28:55 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:28:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:28:56 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v670: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:28:56 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:28:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:28:56 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:28:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:28:56 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:28:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:28:56 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:28:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:28:56 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:28:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:28:56 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:28:56 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:28:56 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:28:56 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:28:56 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:28:56 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:28:57 np0005544708 podman[243027]: 2025-12-03 21:28:57.033659261 +0000 UTC m=+0.053819333 container create 0ba070095caf0c36343510392a9548b9f14b271bb56c227e0fa1f30ce1be5c1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:28:57 np0005544708 systemd[1]: Started libpod-conmon-0ba070095caf0c36343510392a9548b9f14b271bb56c227e0fa1f30ce1be5c1f.scope.
Dec  3 16:28:57 np0005544708 podman[243027]: 2025-12-03 21:28:57.009923775 +0000 UTC m=+0.030083927 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:28:57 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:28:57 np0005544708 podman[243027]: 2025-12-03 21:28:57.1221112 +0000 UTC m=+0.142271332 container init 0ba070095caf0c36343510392a9548b9f14b271bb56c227e0fa1f30ce1be5c1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:28:57 np0005544708 podman[243027]: 2025-12-03 21:28:57.128700216 +0000 UTC m=+0.148860288 container start 0ba070095caf0c36343510392a9548b9f14b271bb56c227e0fa1f30ce1be5c1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:28:57 np0005544708 podman[243027]: 2025-12-03 21:28:57.131503421 +0000 UTC m=+0.151663523 container attach 0ba070095caf0c36343510392a9548b9f14b271bb56c227e0fa1f30ce1be5c1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  3 16:28:57 np0005544708 priceless_darwin[243043]: 167 167
Dec  3 16:28:57 np0005544708 systemd[1]: libpod-0ba070095caf0c36343510392a9548b9f14b271bb56c227e0fa1f30ce1be5c1f.scope: Deactivated successfully.
Dec  3 16:28:57 np0005544708 podman[243027]: 2025-12-03 21:28:57.137367518 +0000 UTC m=+0.157527610 container died 0ba070095caf0c36343510392a9548b9f14b271bb56c227e0fa1f30ce1be5c1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_darwin, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:28:57 np0005544708 systemd[1]: var-lib-containers-storage-overlay-f5f6358899f3133da8a5343efdcd5b64b22c5914666a4c0165208804cb718252-merged.mount: Deactivated successfully.
Dec  3 16:28:57 np0005544708 podman[243027]: 2025-12-03 21:28:57.183737601 +0000 UTC m=+0.203897703 container remove 0ba070095caf0c36343510392a9548b9f14b271bb56c227e0fa1f30ce1be5c1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:28:57 np0005544708 systemd[1]: libpod-conmon-0ba070095caf0c36343510392a9548b9f14b271bb56c227e0fa1f30ce1be5c1f.scope: Deactivated successfully.
Dec  3 16:28:57 np0005544708 podman[243067]: 2025-12-03 21:28:57.385857914 +0000 UTC m=+0.060552343 container create f4c179b384f6ddd0d30417087335825261bf5d8f76e486f5b129ef7409c1c83a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_jones, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 16:28:57 np0005544708 systemd[1]: Started libpod-conmon-f4c179b384f6ddd0d30417087335825261bf5d8f76e486f5b129ef7409c1c83a.scope.
Dec  3 16:28:57 np0005544708 podman[243067]: 2025-12-03 21:28:57.364348388 +0000 UTC m=+0.039042817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:28:57 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:28:57 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ff4585d0ecaf8af235aeeb9fc1b6aff6fa13966aca0beca0c0d90713a34a64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:28:57 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ff4585d0ecaf8af235aeeb9fc1b6aff6fa13966aca0beca0c0d90713a34a64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:28:57 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ff4585d0ecaf8af235aeeb9fc1b6aff6fa13966aca0beca0c0d90713a34a64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:28:57 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ff4585d0ecaf8af235aeeb9fc1b6aff6fa13966aca0beca0c0d90713a34a64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:28:57 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ff4585d0ecaf8af235aeeb9fc1b6aff6fa13966aca0beca0c0d90713a34a64/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:28:57 np0005544708 podman[243067]: 2025-12-03 21:28:57.501910712 +0000 UTC m=+0.176605151 container init f4c179b384f6ddd0d30417087335825261bf5d8f76e486f5b129ef7409c1c83a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:28:57 np0005544708 podman[243067]: 2025-12-03 21:28:57.518420114 +0000 UTC m=+0.193114513 container start f4c179b384f6ddd0d30417087335825261bf5d8f76e486f5b129ef7409c1c83a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_jones, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec  3 16:28:57 np0005544708 podman[243067]: 2025-12-03 21:28:57.521946389 +0000 UTC m=+0.196640828 container attach f4c179b384f6ddd0d30417087335825261bf5d8f76e486f5b129ef7409c1c83a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_jones, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 16:28:58 np0005544708 reverent_jones[243083]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:28:58 np0005544708 reverent_jones[243083]: --> All data devices are unavailable
Dec  3 16:28:58 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v671: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:28:58 np0005544708 systemd[1]: libpod-f4c179b384f6ddd0d30417087335825261bf5d8f76e486f5b129ef7409c1c83a.scope: Deactivated successfully.
Dec  3 16:28:58 np0005544708 podman[243067]: 2025-12-03 21:28:58.111856829 +0000 UTC m=+0.786551228 container died f4c179b384f6ddd0d30417087335825261bf5d8f76e486f5b129ef7409c1c83a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_jones, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec  3 16:28:58 np0005544708 systemd[1]: var-lib-containers-storage-overlay-b3ff4585d0ecaf8af235aeeb9fc1b6aff6fa13966aca0beca0c0d90713a34a64-merged.mount: Deactivated successfully.
Dec  3 16:28:58 np0005544708 podman[243067]: 2025-12-03 21:28:58.153868274 +0000 UTC m=+0.828562703 container remove f4c179b384f6ddd0d30417087335825261bf5d8f76e486f5b129ef7409c1c83a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_jones, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec  3 16:28:58 np0005544708 systemd[1]: libpod-conmon-f4c179b384f6ddd0d30417087335825261bf5d8f76e486f5b129ef7409c1c83a.scope: Deactivated successfully.
Dec  3 16:28:58 np0005544708 podman[243176]: 2025-12-03 21:28:58.734907247 +0000 UTC m=+0.051284605 container create f605237b63cd2c4c476ff530437580d1b767e2e66409a491957500e2fb0a1437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_williams, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:28:58 np0005544708 systemd[1]: Started libpod-conmon-f605237b63cd2c4c476ff530437580d1b767e2e66409a491957500e2fb0a1437.scope.
Dec  3 16:28:58 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:28:58 np0005544708 podman[243176]: 2025-12-03 21:28:58.713784711 +0000 UTC m=+0.030162159 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:28:58 np0005544708 podman[243176]: 2025-12-03 21:28:58.813199184 +0000 UTC m=+0.129576572 container init f605237b63cd2c4c476ff530437580d1b767e2e66409a491957500e2fb0a1437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Dec  3 16:28:58 np0005544708 podman[243176]: 2025-12-03 21:28:58.819103012 +0000 UTC m=+0.135480410 container start f605237b63cd2c4c476ff530437580d1b767e2e66409a491957500e2fb0a1437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_williams, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:28:58 np0005544708 podman[243176]: 2025-12-03 21:28:58.822704199 +0000 UTC m=+0.139081577 container attach f605237b63cd2c4c476ff530437580d1b767e2e66409a491957500e2fb0a1437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec  3 16:28:58 np0005544708 reverent_williams[243193]: 167 167
Dec  3 16:28:58 np0005544708 systemd[1]: libpod-f605237b63cd2c4c476ff530437580d1b767e2e66409a491957500e2fb0a1437.scope: Deactivated successfully.
Dec  3 16:28:58 np0005544708 podman[243176]: 2025-12-03 21:28:58.824054834 +0000 UTC m=+0.140432202 container died f605237b63cd2c4c476ff530437580d1b767e2e66409a491957500e2fb0a1437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  3 16:28:58 np0005544708 systemd[1]: var-lib-containers-storage-overlay-e95bc63dd77a00cc5dd70f99ff5146ac94f111e9eb36c87415e9c768b1a19609-merged.mount: Deactivated successfully.
Dec  3 16:28:58 np0005544708 podman[243176]: 2025-12-03 21:28:58.865218827 +0000 UTC m=+0.181596215 container remove f605237b63cd2c4c476ff530437580d1b767e2e66409a491957500e2fb0a1437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  3 16:28:58 np0005544708 systemd[1]: libpod-conmon-f605237b63cd2c4c476ff530437580d1b767e2e66409a491957500e2fb0a1437.scope: Deactivated successfully.
Dec  3 16:28:59 np0005544708 podman[243218]: 2025-12-03 21:28:59.07852688 +0000 UTC m=+0.065198277 container create c1be0ff08d1033a6ef8da9aa1db1c1ef507c4f15d04b3d5261b5c9cfd8812500 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_kirch, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  3 16:28:59 np0005544708 systemd[1]: Started libpod-conmon-c1be0ff08d1033a6ef8da9aa1db1c1ef507c4f15d04b3d5261b5c9cfd8812500.scope.
Dec  3 16:28:59 np0005544708 podman[243218]: 2025-12-03 21:28:59.057395415 +0000 UTC m=+0.044066882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:28:59 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:28:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad62fbc2ac4f6ee6f75eecdd2a1f844f5f86fffa303a9ec100b432355535089b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:28:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad62fbc2ac4f6ee6f75eecdd2a1f844f5f86fffa303a9ec100b432355535089b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:28:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad62fbc2ac4f6ee6f75eecdd2a1f844f5f86fffa303a9ec100b432355535089b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:28:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad62fbc2ac4f6ee6f75eecdd2a1f844f5f86fffa303a9ec100b432355535089b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:28:59 np0005544708 podman[243218]: 2025-12-03 21:28:59.192153484 +0000 UTC m=+0.178824921 container init c1be0ff08d1033a6ef8da9aa1db1c1ef507c4f15d04b3d5261b5c9cfd8812500 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  3 16:28:59 np0005544708 podman[243218]: 2025-12-03 21:28:59.20616963 +0000 UTC m=+0.192840997 container start c1be0ff08d1033a6ef8da9aa1db1c1ef507c4f15d04b3d5261b5c9cfd8812500 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  3 16:28:59 np0005544708 podman[243218]: 2025-12-03 21:28:59.20881754 +0000 UTC m=+0.195489017 container attach c1be0ff08d1033a6ef8da9aa1db1c1ef507c4f15d04b3d5261b5c9cfd8812500 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_kirch, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]: {
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:    "0": [
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:        {
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "devices": [
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "/dev/loop3"
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            ],
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "lv_name": "ceph_lv0",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "lv_size": "21470642176",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "name": "ceph_lv0",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "tags": {
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.cluster_name": "ceph",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.crush_device_class": "",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.encrypted": "0",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.objectstore": "bluestore",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.osd_id": "0",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.type": "block",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.vdo": "0",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.with_tpm": "0"
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            },
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "type": "block",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "vg_name": "ceph_vg0"
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:        }
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:    ],
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:    "1": [
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:        {
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "devices": [
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "/dev/loop4"
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            ],
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "lv_name": "ceph_lv1",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "lv_size": "21470642176",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "name": "ceph_lv1",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "tags": {
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.cluster_name": "ceph",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.crush_device_class": "",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.encrypted": "0",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.objectstore": "bluestore",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.osd_id": "1",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.type": "block",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.vdo": "0",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.with_tpm": "0"
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            },
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "type": "block",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "vg_name": "ceph_vg1"
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:        }
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:    ],
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:    "2": [
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:        {
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "devices": [
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "/dev/loop5"
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            ],
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "lv_name": "ceph_lv2",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "lv_size": "21470642176",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "name": "ceph_lv2",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "tags": {
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.cluster_name": "ceph",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.crush_device_class": "",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.encrypted": "0",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.objectstore": "bluestore",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.osd_id": "2",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.type": "block",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.vdo": "0",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:                "ceph.with_tpm": "0"
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            },
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "type": "block",
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:            "vg_name": "ceph_vg2"
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:        }
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]:    ]
Dec  3 16:28:59 np0005544708 vibrant_kirch[243235]: }
Dec  3 16:28:59 np0005544708 systemd[1]: libpod-c1be0ff08d1033a6ef8da9aa1db1c1ef507c4f15d04b3d5261b5c9cfd8812500.scope: Deactivated successfully.
Dec  3 16:28:59 np0005544708 podman[243218]: 2025-12-03 21:28:59.583214908 +0000 UTC m=+0.569886275 container died c1be0ff08d1033a6ef8da9aa1db1c1ef507c4f15d04b3d5261b5c9cfd8812500 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_kirch, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:28:59 np0005544708 systemd[1]: var-lib-containers-storage-overlay-ad62fbc2ac4f6ee6f75eecdd2a1f844f5f86fffa303a9ec100b432355535089b-merged.mount: Deactivated successfully.
Dec  3 16:28:59 np0005544708 podman[243218]: 2025-12-03 21:28:59.620915368 +0000 UTC m=+0.607586735 container remove c1be0ff08d1033a6ef8da9aa1db1c1ef507c4f15d04b3d5261b5c9cfd8812500 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  3 16:28:59 np0005544708 systemd[1]: libpod-conmon-c1be0ff08d1033a6ef8da9aa1db1c1ef507c4f15d04b3d5261b5c9cfd8812500.scope: Deactivated successfully.
Dec  3 16:28:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  3 16:28:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3654616200' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  3 16:28:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  3 16:28:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3654616200' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  3 16:29:00 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v672: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:00 np0005544708 podman[243316]: 2025-12-03 21:29:00.1825469 +0000 UTC m=+0.062120904 container create 63e56dfd2e059a894c93073544766848bc39d1edbaa3b580f99b2ee91e7403ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  3 16:29:00 np0005544708 systemd[1]: Started libpod-conmon-63e56dfd2e059a894c93073544766848bc39d1edbaa3b580f99b2ee91e7403ca.scope.
Dec  3 16:29:00 np0005544708 podman[243316]: 2025-12-03 21:29:00.157625703 +0000 UTC m=+0.037199767 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:29:00 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:29:00 np0005544708 podman[243316]: 2025-12-03 21:29:00.287936323 +0000 UTC m=+0.167510407 container init 63e56dfd2e059a894c93073544766848bc39d1edbaa3b580f99b2ee91e7403ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_solomon, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  3 16:29:00 np0005544708 podman[243316]: 2025-12-03 21:29:00.30238481 +0000 UTC m=+0.181958794 container start 63e56dfd2e059a894c93073544766848bc39d1edbaa3b580f99b2ee91e7403ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  3 16:29:00 np0005544708 sweet_solomon[243335]: 167 167
Dec  3 16:29:00 np0005544708 podman[243316]: 2025-12-03 21:29:00.308163385 +0000 UTC m=+0.187737369 container attach 63e56dfd2e059a894c93073544766848bc39d1edbaa3b580f99b2ee91e7403ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_solomon, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:29:00 np0005544708 podman[243316]: 2025-12-03 21:29:00.319133369 +0000 UTC m=+0.198707383 container died 63e56dfd2e059a894c93073544766848bc39d1edbaa3b580f99b2ee91e7403ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_solomon, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  3 16:29:00 np0005544708 systemd[1]: libpod-63e56dfd2e059a894c93073544766848bc39d1edbaa3b580f99b2ee91e7403ca.scope: Deactivated successfully.
Dec  3 16:29:00 np0005544708 podman[243334]: 2025-12-03 21:29:00.344836497 +0000 UTC m=+0.105148687 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 16:29:00 np0005544708 systemd[1]: var-lib-containers-storage-overlay-82ab0babd6da154e361790e9cdd08d422229631dd6566ca3a062d61f5d3d76bd-merged.mount: Deactivated successfully.
Dec  3 16:29:00 np0005544708 podman[243316]: 2025-12-03 21:29:00.370743342 +0000 UTC m=+0.250317316 container remove 63e56dfd2e059a894c93073544766848bc39d1edbaa3b580f99b2ee91e7403ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Dec  3 16:29:00 np0005544708 podman[243330]: 2025-12-03 21:29:00.373544657 +0000 UTC m=+0.136309683 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:29:00 np0005544708 systemd[1]: libpod-conmon-63e56dfd2e059a894c93073544766848bc39d1edbaa3b580f99b2ee91e7403ca.scope: Deactivated successfully.
Dec  3 16:29:00 np0005544708 podman[243389]: 2025-12-03 21:29:00.536439039 +0000 UTC m=+0.048203372 container create 59b74b5f317a83bea41f09abecaca54fff546bc046b5813933a2e3a72cc7487d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_darwin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 16:29:00 np0005544708 systemd[1]: Started libpod-conmon-59b74b5f317a83bea41f09abecaca54fff546bc046b5813933a2e3a72cc7487d.scope.
Dec  3 16:29:00 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:29:00 np0005544708 podman[243389]: 2025-12-03 21:29:00.516926757 +0000 UTC m=+0.028691110 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:29:00 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1944ae2c67b4d552ae65ec1b33c08d997756e58e4d568629eb8516c60448bb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:29:00 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1944ae2c67b4d552ae65ec1b33c08d997756e58e4d568629eb8516c60448bb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:29:00 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1944ae2c67b4d552ae65ec1b33c08d997756e58e4d568629eb8516c60448bb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:29:00 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1944ae2c67b4d552ae65ec1b33c08d997756e58e4d568629eb8516c60448bb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:29:00 np0005544708 podman[243389]: 2025-12-03 21:29:00.627166879 +0000 UTC m=+0.138931242 container init 59b74b5f317a83bea41f09abecaca54fff546bc046b5813933a2e3a72cc7487d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_darwin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 16:29:00 np0005544708 podman[243389]: 2025-12-03 21:29:00.639616982 +0000 UTC m=+0.151381325 container start 59b74b5f317a83bea41f09abecaca54fff546bc046b5813933a2e3a72cc7487d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:29:00 np0005544708 podman[243389]: 2025-12-03 21:29:00.643304882 +0000 UTC m=+0.155069215 container attach 59b74b5f317a83bea41f09abecaca54fff546bc046b5813933a2e3a72cc7487d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  3 16:29:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:29:01 np0005544708 lvm[243485]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:29:01 np0005544708 lvm[243485]: VG ceph_vg0 finished
Dec  3 16:29:01 np0005544708 lvm[243486]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:29:01 np0005544708 lvm[243486]: VG ceph_vg1 finished
Dec  3 16:29:01 np0005544708 lvm[243487]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:29:01 np0005544708 lvm[243487]: VG ceph_vg2 finished
Dec  3 16:29:01 np0005544708 flamboyant_darwin[243406]: {}
Dec  3 16:29:01 np0005544708 systemd[1]: libpod-59b74b5f317a83bea41f09abecaca54fff546bc046b5813933a2e3a72cc7487d.scope: Deactivated successfully.
Dec  3 16:29:01 np0005544708 systemd[1]: libpod-59b74b5f317a83bea41f09abecaca54fff546bc046b5813933a2e3a72cc7487d.scope: Consumed 1.360s CPU time.
Dec  3 16:29:01 np0005544708 podman[243389]: 2025-12-03 21:29:01.527740111 +0000 UTC m=+1.039504514 container died 59b74b5f317a83bea41f09abecaca54fff546bc046b5813933a2e3a72cc7487d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_darwin, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec  3 16:29:01 np0005544708 systemd[1]: var-lib-containers-storage-overlay-b1944ae2c67b4d552ae65ec1b33c08d997756e58e4d568629eb8516c60448bb7-merged.mount: Deactivated successfully.
Dec  3 16:29:01 np0005544708 podman[243389]: 2025-12-03 21:29:01.583916565 +0000 UTC m=+1.095680898 container remove 59b74b5f317a83bea41f09abecaca54fff546bc046b5813933a2e3a72cc7487d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_darwin, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:29:01 np0005544708 systemd[1]: libpod-conmon-59b74b5f317a83bea41f09abecaca54fff546bc046b5813933a2e3a72cc7487d.scope: Deactivated successfully.
Dec  3 16:29:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:29:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:29:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:29:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:29:01 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:29:01 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:29:02 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v673: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:04 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v674: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:29:06 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v675: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:08 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v676: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:09 np0005544708 ceph-osd[86059]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:29:09 np0005544708 ceph-osd[86059]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4373 writes, 20K keys, 4373 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4373 writes, 451 syncs, 9.70 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561448c398d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561448c398d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, i
Dec  3 16:29:10 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v677: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:29:11.100794) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797351100827, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2233, "num_deletes": 506, "total_data_size": 2250616, "memory_usage": 2298112, "flush_reason": "Manual Compaction"}
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797351125465, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2179430, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12103, "largest_seqno": 14335, "table_properties": {"data_size": 2169902, "index_size": 5510, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 21666, "raw_average_key_size": 18, "raw_value_size": 2148773, "raw_average_value_size": 1853, "num_data_blocks": 253, "num_entries": 1159, "num_filter_entries": 1159, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764797135, "oldest_key_time": 1764797135, "file_creation_time": 1764797351, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 24799 microseconds, and 10039 cpu microseconds.
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:29:11.125550) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2179430 bytes OK
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:29:11.125618) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:29:11.126783) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:29:11.126805) EVENT_LOG_v1 {"time_micros": 1764797351126798, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:29:11.126833) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2240263, prev total WAL file size 2240263, number of live WAL files 2.
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:29:11.127846) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(2128KB)], [32(4774KB)]
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797351127904, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 7068903, "oldest_snapshot_seqno": -1}
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3363 keys, 5627071 bytes, temperature: kUnknown
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797351181177, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 5627071, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5601893, "index_size": 15661, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 79454, "raw_average_key_size": 23, "raw_value_size": 5538757, "raw_average_value_size": 1646, "num_data_blocks": 678, "num_entries": 3363, "num_filter_entries": 3363, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796079, "oldest_key_time": 0, "file_creation_time": 1764797351, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:29:11.181485) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 5627071 bytes
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:29:11.183003) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.4 rd, 105.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 4.7 +0.0 blob) out(5.4 +0.0 blob), read-write-amplify(5.8) write-amplify(2.6) OK, records in: 4388, records dropped: 1025 output_compression: NoCompression
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:29:11.183039) EVENT_LOG_v1 {"time_micros": 1764797351183021, "job": 14, "event": "compaction_finished", "compaction_time_micros": 53376, "compaction_time_cpu_micros": 27004, "output_level": 6, "num_output_files": 1, "total_output_size": 5627071, "num_input_records": 4388, "num_output_records": 3363, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797351183934, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797351185717, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:29:11.127719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:29:11.185796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:29:11.185805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:29:11.185809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:29:11.185813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:29:11 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:29:11.185817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:29:12 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v678: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:14 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v679: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:14 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:29:14 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.0 total, 600.0 interval
Cumulative writes: 4515 writes, 20K keys, 4515 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 4515 writes, 505 syncs, 8.94 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55cf1bed58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55cf1bed58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, i
Dec  3 16:29:16 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v680: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:29:18 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v681: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:18 np0005544708 podman[243527]: 2025-12-03 21:29:18.243792893 +0000 UTC m=+0.168903585 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 16:29:19 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:29:19 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 4150 writes, 19K keys, 4150 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 4150 writes, 366 syncs, 11.34 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559f074b98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559f074b98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown,
Dec  3 16:29:20 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v682: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.271 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.272 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.296 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.297 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.297 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.310 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.310 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.311 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.311 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.311 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.311 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.311 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.311 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.312 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.340 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.341 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.342 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.342 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.343 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:29:20 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:29:20 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3362728199' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:29:20 np0005544708 nova_compute[241566]: 2025-12-03 21:29:20.963 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:29:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:29:21 np0005544708 nova_compute[241566]: 2025-12-03 21:29:21.193 241570 WARNING nova.virt.libvirt.driver [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 16:29:21 np0005544708 nova_compute[241566]: 2025-12-03 21:29:21.194 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5285MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 16:29:21 np0005544708 nova_compute[241566]: 2025-12-03 21:29:21.195 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:29:21 np0005544708 nova_compute[241566]: 2025-12-03 21:29:21.195 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:29:21
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['vms', 'images', 'volumes', '.mgr', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta']
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [devicehealth INFO root] Check health
Dec  3 16:29:21 np0005544708 nova_compute[241566]: 2025-12-03 21:29:21.289 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 16:29:21 np0005544708 nova_compute[241566]: 2025-12-03 21:29:21.290 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 16:29:21 np0005544708 nova_compute[241566]: 2025-12-03 21:29:21.324 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:29:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:29:21 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2368212542' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:29:21 np0005544708 nova_compute[241566]: 2025-12-03 21:29:21.859 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:29:21 np0005544708 nova_compute[241566]: 2025-12-03 21:29:21.871 241570 DEBUG nova.compute.provider_tree [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed in ProviderTree for provider: 94aba67c-5c5e-45d0-83d1-33eb467c8775 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 16:29:21 np0005544708 nova_compute[241566]: 2025-12-03 21:29:21.891 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 16:29:21 np0005544708 nova_compute[241566]: 2025-12-03 21:29:21.892 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 16:29:21 np0005544708 nova_compute[241566]: 2025-12-03 21:29:21.892 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:29:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:29:22 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v683: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:24 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:29:24.065 151937 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:b3:fa', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:85:3a:67:f5:74'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 16:29:24 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:29:24.068 151937 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 16:29:24 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:29:24.070 151937 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f27c01e7-5b62-4209-a664-3ae50b74644d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 16:29:24 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v684: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:29:26 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v685: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:29:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:29:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:29:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:29:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:29:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:29:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:29:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:29:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:29:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:29:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:29:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:29:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:29:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:29:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:29:28 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v686: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:30 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v687: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:29:31 np0005544708 podman[243598]: 2025-12-03 21:29:31.1588533 +0000 UTC m=+0.080983800 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 16:29:31 np0005544708 podman[243597]: 2025-12-03 21:29:31.167354358 +0000 UTC m=+0.092807167 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:29:32 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v688: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:34 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v689: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:29:36 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v690: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:38 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v691: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:40 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v692: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:29:42 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v693: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:44 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v694: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:29:46 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v695: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:48 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v696: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:29:48.932 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:29:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:29:48.933 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:29:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:29:48.933 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:29:49 np0005544708 podman[243635]: 2025-12-03 21:29:49.20255909 +0000 UTC m=+0.135929362 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 16:29:50 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v697: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:29:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:29:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:29:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:29:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:29:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:29:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:29:52 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v698: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:54 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v699: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:29:56 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v700: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:58 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v701: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:29:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  3 16:29:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2897846278' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  3 16:29:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  3 16:29:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2897846278' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  3 16:30:00 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v702: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:30:01 np0005544708 podman[243686]: 2025-12-03 21:30:01.877001492 +0000 UTC m=+0.060639304 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 16:30:01 np0005544708 podman[243687]: 2025-12-03 21:30:01.876971402 +0000 UTC m=+0.056459103 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 16:30:02 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v703: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  3 16:30:02 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec  3 16:30:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:30:02 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:30:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:30:02 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:30:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:30:02 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:30:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:30:02 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:30:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:30:02 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:30:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:30:02 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:30:03 np0005544708 podman[243844]: 2025-12-03 21:30:03.058429906 +0000 UTC m=+0.061553400 container create 707f37c1ef48842071d606f145bbeea02611403fc3b7fd74061e8dcb92c283fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_satoshi, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  3 16:30:03 np0005544708 systemd[1]: Started libpod-conmon-707f37c1ef48842071d606f145bbeea02611403fc3b7fd74061e8dcb92c283fa.scope.
Dec  3 16:30:03 np0005544708 podman[243844]: 2025-12-03 21:30:03.0268452 +0000 UTC m=+0.029968744 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:30:03 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:30:03 np0005544708 podman[243844]: 2025-12-03 21:30:03.172778229 +0000 UTC m=+0.175901743 container init 707f37c1ef48842071d606f145bbeea02611403fc3b7fd74061e8dcb92c283fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_satoshi, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:30:03 np0005544708 podman[243844]: 2025-12-03 21:30:03.185373256 +0000 UTC m=+0.188496720 container start 707f37c1ef48842071d606f145bbeea02611403fc3b7fd74061e8dcb92c283fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 16:30:03 np0005544708 podman[243844]: 2025-12-03 21:30:03.188466419 +0000 UTC m=+0.191589933 container attach 707f37c1ef48842071d606f145bbeea02611403fc3b7fd74061e8dcb92c283fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 16:30:03 np0005544708 magical_satoshi[243861]: 167 167
Dec  3 16:30:03 np0005544708 systemd[1]: libpod-707f37c1ef48842071d606f145bbeea02611403fc3b7fd74061e8dcb92c283fa.scope: Deactivated successfully.
Dec  3 16:30:03 np0005544708 conmon[243861]: conmon 707f37c1ef48842071d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-707f37c1ef48842071d606f145bbeea02611403fc3b7fd74061e8dcb92c283fa.scope/container/memory.events
Dec  3 16:30:03 np0005544708 podman[243844]: 2025-12-03 21:30:03.196365 +0000 UTC m=+0.199488464 container died 707f37c1ef48842071d606f145bbeea02611403fc3b7fd74061e8dcb92c283fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_satoshi, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:30:03 np0005544708 systemd[1]: var-lib-containers-storage-overlay-b25f69a5565dbd5d9357856f032cb14ba98991c4e756430b52ad5ce4df631da5-merged.mount: Deactivated successfully.
Dec  3 16:30:03 np0005544708 podman[243844]: 2025-12-03 21:30:03.235322654 +0000 UTC m=+0.238446118 container remove 707f37c1ef48842071d606f145bbeea02611403fc3b7fd74061e8dcb92c283fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_satoshi, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec  3 16:30:03 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec  3 16:30:03 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:30:03 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:30:03 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:30:03 np0005544708 systemd[1]: libpod-conmon-707f37c1ef48842071d606f145bbeea02611403fc3b7fd74061e8dcb92c283fa.scope: Deactivated successfully.
Dec  3 16:30:03 np0005544708 podman[243884]: 2025-12-03 21:30:03.396012717 +0000 UTC m=+0.040389222 container create 4bbb5972c58771df1f77866be743d991028cf3f85773e5a39227ef4b8f85ddcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_nash, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:30:03 np0005544708 systemd[1]: Started libpod-conmon-4bbb5972c58771df1f77866be743d991028cf3f85773e5a39227ef4b8f85ddcf.scope.
Dec  3 16:30:03 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:30:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e685c0bacb3d852b22f8783ecb06881ac8b0acd8b49013815569da6027b8f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:30:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e685c0bacb3d852b22f8783ecb06881ac8b0acd8b49013815569da6027b8f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:30:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e685c0bacb3d852b22f8783ecb06881ac8b0acd8b49013815569da6027b8f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:30:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e685c0bacb3d852b22f8783ecb06881ac8b0acd8b49013815569da6027b8f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:30:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e685c0bacb3d852b22f8783ecb06881ac8b0acd8b49013815569da6027b8f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:30:03 np0005544708 podman[243884]: 2025-12-03 21:30:03.377828671 +0000 UTC m=+0.022205206 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:30:03 np0005544708 podman[243884]: 2025-12-03 21:30:03.477740327 +0000 UTC m=+0.122116822 container init 4bbb5972c58771df1f77866be743d991028cf3f85773e5a39227ef4b8f85ddcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:30:03 np0005544708 podman[243884]: 2025-12-03 21:30:03.486915333 +0000 UTC m=+0.131291818 container start 4bbb5972c58771df1f77866be743d991028cf3f85773e5a39227ef4b8f85ddcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_nash, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 16:30:03 np0005544708 podman[243884]: 2025-12-03 21:30:03.489445971 +0000 UTC m=+0.133822476 container attach 4bbb5972c58771df1f77866be743d991028cf3f85773e5a39227ef4b8f85ddcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:30:03 np0005544708 laughing_nash[243900]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:30:03 np0005544708 laughing_nash[243900]: --> All data devices are unavailable
Dec  3 16:30:03 np0005544708 systemd[1]: libpod-4bbb5972c58771df1f77866be743d991028cf3f85773e5a39227ef4b8f85ddcf.scope: Deactivated successfully.
Dec  3 16:30:04 np0005544708 conmon[243900]: conmon 4bbb5972c58771df1f77 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4bbb5972c58771df1f77866be743d991028cf3f85773e5a39227ef4b8f85ddcf.scope/container/memory.events
Dec  3 16:30:04 np0005544708 podman[243884]: 2025-12-03 21:30:04.001917777 +0000 UTC m=+0.646294282 container died 4bbb5972c58771df1f77866be743d991028cf3f85773e5a39227ef4b8f85ddcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:30:04 np0005544708 systemd[1]: var-lib-containers-storage-overlay-53e685c0bacb3d852b22f8783ecb06881ac8b0acd8b49013815569da6027b8f4-merged.mount: Deactivated successfully.
Dec  3 16:30:04 np0005544708 podman[243884]: 2025-12-03 21:30:04.08865127 +0000 UTC m=+0.733027765 container remove 4bbb5972c58771df1f77866be743d991028cf3f85773e5a39227ef4b8f85ddcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_nash, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:30:04 np0005544708 systemd[1]: libpod-conmon-4bbb5972c58771df1f77866be743d991028cf3f85773e5a39227ef4b8f85ddcf.scope: Deactivated successfully.
Dec  3 16:30:04 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v704: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:04 np0005544708 podman[243995]: 2025-12-03 21:30:04.584224713 +0000 UTC m=+0.042393797 container create 0b94060d708efc7b28d50c60d9882ff5771b1ee00266d56f7e4b8a2677abd782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_neumann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:30:04 np0005544708 systemd[1]: Started libpod-conmon-0b94060d708efc7b28d50c60d9882ff5771b1ee00266d56f7e4b8a2677abd782.scope.
Dec  3 16:30:04 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:30:04 np0005544708 podman[243995]: 2025-12-03 21:30:04.565186743 +0000 UTC m=+0.023355867 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:30:04 np0005544708 podman[243995]: 2025-12-03 21:30:04.670316889 +0000 UTC m=+0.128485983 container init 0b94060d708efc7b28d50c60d9882ff5771b1ee00266d56f7e4b8a2677abd782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec  3 16:30:04 np0005544708 podman[243995]: 2025-12-03 21:30:04.681396735 +0000 UTC m=+0.139565819 container start 0b94060d708efc7b28d50c60d9882ff5771b1ee00266d56f7e4b8a2677abd782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:30:04 np0005544708 podman[243995]: 2025-12-03 21:30:04.684625262 +0000 UTC m=+0.142794356 container attach 0b94060d708efc7b28d50c60d9882ff5771b1ee00266d56f7e4b8a2677abd782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_neumann, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:30:04 np0005544708 heuristic_neumann[244011]: 167 167
Dec  3 16:30:04 np0005544708 systemd[1]: libpod-0b94060d708efc7b28d50c60d9882ff5771b1ee00266d56f7e4b8a2677abd782.scope: Deactivated successfully.
Dec  3 16:30:04 np0005544708 podman[243995]: 2025-12-03 21:30:04.686316697 +0000 UTC m=+0.144485801 container died 0b94060d708efc7b28d50c60d9882ff5771b1ee00266d56f7e4b8a2677abd782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:30:04 np0005544708 systemd[1]: var-lib-containers-storage-overlay-e0338eaaabe2d4a2b564097b8dc20c6c8f618a9d30d89c07da02485009b55a17-merged.mount: Deactivated successfully.
Dec  3 16:30:04 np0005544708 podman[243995]: 2025-12-03 21:30:04.736309267 +0000 UTC m=+0.194478371 container remove 0b94060d708efc7b28d50c60d9882ff5771b1ee00266d56f7e4b8a2677abd782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec  3 16:30:04 np0005544708 systemd[1]: libpod-conmon-0b94060d708efc7b28d50c60d9882ff5771b1ee00266d56f7e4b8a2677abd782.scope: Deactivated successfully.
Dec  3 16:30:05 np0005544708 podman[244034]: 2025-12-03 21:30:04.946463245 +0000 UTC m=+0.025843053 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:30:05 np0005544708 podman[244034]: 2025-12-03 21:30:05.568943497 +0000 UTC m=+0.648323295 container create e385f06d816ce4dc28410596964e5e143f8af34dda4630723489ba51e0a6d86c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:30:05 np0005544708 systemd[1]: Started libpod-conmon-e385f06d816ce4dc28410596964e5e143f8af34dda4630723489ba51e0a6d86c.scope.
Dec  3 16:30:05 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:30:05 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a22191375e053be6c53ac5d1cbdb4a625ae9a77ac2dc566fa51501f9e31fd1c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:30:05 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a22191375e053be6c53ac5d1cbdb4a625ae9a77ac2dc566fa51501f9e31fd1c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:30:05 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a22191375e053be6c53ac5d1cbdb4a625ae9a77ac2dc566fa51501f9e31fd1c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:30:05 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a22191375e053be6c53ac5d1cbdb4a625ae9a77ac2dc566fa51501f9e31fd1c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:30:05 np0005544708 podman[244034]: 2025-12-03 21:30:05.678209294 +0000 UTC m=+0.757589102 container init e385f06d816ce4dc28410596964e5e143f8af34dda4630723489ba51e0a6d86c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_chebyshev, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:30:05 np0005544708 podman[244034]: 2025-12-03 21:30:05.698438646 +0000 UTC m=+0.777818424 container start e385f06d816ce4dc28410596964e5e143f8af34dda4630723489ba51e0a6d86c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_chebyshev, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  3 16:30:05 np0005544708 podman[244034]: 2025-12-03 21:30:05.702648449 +0000 UTC m=+0.782028277 container attach e385f06d816ce4dc28410596964e5e143f8af34dda4630723489ba51e0a6d86c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]: {
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:    "0": [
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:        {
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "devices": [
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "/dev/loop3"
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            ],
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "lv_name": "ceph_lv0",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "lv_size": "21470642176",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "name": "ceph_lv0",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "tags": {
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.cluster_name": "ceph",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.crush_device_class": "",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.encrypted": "0",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.objectstore": "bluestore",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.osd_id": "0",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.type": "block",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.vdo": "0",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.with_tpm": "0"
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            },
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "type": "block",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "vg_name": "ceph_vg0"
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:        }
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:    ],
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:    "1": [
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:        {
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "devices": [
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "/dev/loop4"
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            ],
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "lv_name": "ceph_lv1",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "lv_size": "21470642176",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "name": "ceph_lv1",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "tags": {
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.cluster_name": "ceph",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.crush_device_class": "",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.encrypted": "0",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.objectstore": "bluestore",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.osd_id": "1",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.type": "block",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.vdo": "0",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.with_tpm": "0"
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            },
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "type": "block",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "vg_name": "ceph_vg1"
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:        }
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:    ],
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:    "2": [
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:        {
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "devices": [
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "/dev/loop5"
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            ],
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "lv_name": "ceph_lv2",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "lv_size": "21470642176",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "name": "ceph_lv2",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "tags": {
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.cluster_name": "ceph",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.crush_device_class": "",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.encrypted": "0",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.objectstore": "bluestore",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.osd_id": "2",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.type": "block",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.vdo": "0",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:                "ceph.with_tpm": "0"
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            },
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "type": "block",
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:            "vg_name": "ceph_vg2"
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:        }
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]:    ]
Dec  3 16:30:06 np0005544708 agitated_chebyshev[244051]: }
Dec  3 16:30:06 np0005544708 systemd[1]: libpod-e385f06d816ce4dc28410596964e5e143f8af34dda4630723489ba51e0a6d86c.scope: Deactivated successfully.
Dec  3 16:30:06 np0005544708 podman[244034]: 2025-12-03 21:30:06.085085652 +0000 UTC m=+1.164465440 container died e385f06d816ce4dc28410596964e5e143f8af34dda4630723489ba51e0a6d86c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec  3 16:30:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:30:06 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v705: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:06 np0005544708 systemd[1]: var-lib-containers-storage-overlay-a22191375e053be6c53ac5d1cbdb4a625ae9a77ac2dc566fa51501f9e31fd1c2-merged.mount: Deactivated successfully.
Dec  3 16:30:06 np0005544708 podman[244034]: 2025-12-03 21:30:06.149081186 +0000 UTC m=+1.228460954 container remove e385f06d816ce4dc28410596964e5e143f8af34dda4630723489ba51e0a6d86c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 16:30:06 np0005544708 systemd[1]: libpod-conmon-e385f06d816ce4dc28410596964e5e143f8af34dda4630723489ba51e0a6d86c.scope: Deactivated successfully.
Dec  3 16:30:06 np0005544708 podman[244132]: 2025-12-03 21:30:06.736545641 +0000 UTC m=+0.082549623 container create 8599f88c3de5956601f3106a5bed47faa5e8265367ba87439643eb755524837d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_black, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:30:06 np0005544708 podman[244132]: 2025-12-03 21:30:06.681226069 +0000 UTC m=+0.027230141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:30:06 np0005544708 systemd[1]: Started libpod-conmon-8599f88c3de5956601f3106a5bed47faa5e8265367ba87439643eb755524837d.scope.
Dec  3 16:30:06 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:30:06 np0005544708 podman[244132]: 2025-12-03 21:30:06.886865628 +0000 UTC m=+0.232869640 container init 8599f88c3de5956601f3106a5bed47faa5e8265367ba87439643eb755524837d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_black, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec  3 16:30:06 np0005544708 podman[244132]: 2025-12-03 21:30:06.892601071 +0000 UTC m=+0.238605043 container start 8599f88c3de5956601f3106a5bed47faa5e8265367ba87439643eb755524837d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_black, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  3 16:30:06 np0005544708 podman[244132]: 2025-12-03 21:30:06.896283449 +0000 UTC m=+0.242287431 container attach 8599f88c3de5956601f3106a5bed47faa5e8265367ba87439643eb755524837d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_black, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:30:06 np0005544708 focused_black[244149]: 167 167
Dec  3 16:30:06 np0005544708 systemd[1]: libpod-8599f88c3de5956601f3106a5bed47faa5e8265367ba87439643eb755524837d.scope: Deactivated successfully.
Dec  3 16:30:06 np0005544708 podman[244132]: 2025-12-03 21:30:06.897789859 +0000 UTC m=+0.243793831 container died 8599f88c3de5956601f3106a5bed47faa5e8265367ba87439643eb755524837d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:30:06 np0005544708 systemd[1]: var-lib-containers-storage-overlay-32550418eea9e443147418caf4bb4e741b3cafd4ee0fa7d2cb6e2b63de22219b-merged.mount: Deactivated successfully.
Dec  3 16:30:06 np0005544708 podman[244132]: 2025-12-03 21:30:06.936889767 +0000 UTC m=+0.282893739 container remove 8599f88c3de5956601f3106a5bed47faa5e8265367ba87439643eb755524837d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_black, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:30:06 np0005544708 systemd[1]: libpod-conmon-8599f88c3de5956601f3106a5bed47faa5e8265367ba87439643eb755524837d.scope: Deactivated successfully.
Dec  3 16:30:07 np0005544708 podman[244173]: 2025-12-03 21:30:07.099260955 +0000 UTC m=+0.046365102 container create e35ecd12ea9449ebaa69606fa6df626fef7109ddc1183df9905abd836958ba08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:30:07 np0005544708 systemd[1]: Started libpod-conmon-e35ecd12ea9449ebaa69606fa6df626fef7109ddc1183df9905abd836958ba08.scope.
Dec  3 16:30:07 np0005544708 podman[244173]: 2025-12-03 21:30:07.076347292 +0000 UTC m=+0.023451489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:30:07 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:30:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fe0d2d32986483ae8a4fd29a8fcc99a740cacab280de21422f3a4ed53c7bb74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:30:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fe0d2d32986483ae8a4fd29a8fcc99a740cacab280de21422f3a4ed53c7bb74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:30:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fe0d2d32986483ae8a4fd29a8fcc99a740cacab280de21422f3a4ed53c7bb74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:30:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fe0d2d32986483ae8a4fd29a8fcc99a740cacab280de21422f3a4ed53c7bb74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:30:07 np0005544708 podman[244173]: 2025-12-03 21:30:07.194291251 +0000 UTC m=+0.141395448 container init e35ecd12ea9449ebaa69606fa6df626fef7109ddc1183df9905abd836958ba08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:30:07 np0005544708 podman[244173]: 2025-12-03 21:30:07.209731325 +0000 UTC m=+0.156835462 container start e35ecd12ea9449ebaa69606fa6df626fef7109ddc1183df9905abd836958ba08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 16:30:07 np0005544708 podman[244173]: 2025-12-03 21:30:07.215190871 +0000 UTC m=+0.162295068 container attach e35ecd12ea9449ebaa69606fa6df626fef7109ddc1183df9905abd836958ba08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_faraday, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:30:07 np0005544708 lvm[244268]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:30:07 np0005544708 lvm[244268]: VG ceph_vg0 finished
Dec  3 16:30:07 np0005544708 lvm[244267]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:30:07 np0005544708 lvm[244267]: VG ceph_vg1 finished
Dec  3 16:30:07 np0005544708 lvm[244270]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:30:07 np0005544708 lvm[244270]: VG ceph_vg2 finished
Dec  3 16:30:08 np0005544708 funny_faraday[244189]: {}
Dec  3 16:30:08 np0005544708 systemd[1]: libpod-e35ecd12ea9449ebaa69606fa6df626fef7109ddc1183df9905abd836958ba08.scope: Deactivated successfully.
Dec  3 16:30:08 np0005544708 podman[244173]: 2025-12-03 21:30:08.04139333 +0000 UTC m=+0.988497477 container died e35ecd12ea9449ebaa69606fa6df626fef7109ddc1183df9905abd836958ba08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_faraday, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  3 16:30:08 np0005544708 systemd[1]: libpod-e35ecd12ea9449ebaa69606fa6df626fef7109ddc1183df9905abd836958ba08.scope: Consumed 1.370s CPU time.
Dec  3 16:30:08 np0005544708 systemd[1]: var-lib-containers-storage-overlay-5fe0d2d32986483ae8a4fd29a8fcc99a740cacab280de21422f3a4ed53c7bb74-merged.mount: Deactivated successfully.
Dec  3 16:30:08 np0005544708 podman[244173]: 2025-12-03 21:30:08.097976086 +0000 UTC m=+1.045080243 container remove e35ecd12ea9449ebaa69606fa6df626fef7109ddc1183df9905abd836958ba08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_faraday, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec  3 16:30:08 np0005544708 systemd[1]: libpod-conmon-e35ecd12ea9449ebaa69606fa6df626fef7109ddc1183df9905abd836958ba08.scope: Deactivated successfully.
Dec  3 16:30:08 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v706: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:08 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:30:08 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:30:08 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:30:08 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:30:09 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:30:09 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:30:10 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v707: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:30:12 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v708: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:14 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v709: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:30:16 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v710: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:18 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v711: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:20 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v712: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:20 np0005544708 podman[244310]: 2025-12-03 21:30:20.174990815 +0000 UTC m=+0.113642425 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:30:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:30:21
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['volumes', 'vms', 'cephfs.cephfs.meta', 'backups', '.mgr', 'cephfs.cephfs.data', 'images']
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:30:21 np0005544708 nova_compute[241566]: 2025-12-03 21:30:21.894 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:30:21 np0005544708 nova_compute[241566]: 2025-12-03 21:30:21.895 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:30:21 np0005544708 nova_compute[241566]: 2025-12-03 21:30:21.895 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 16:30:21 np0005544708 nova_compute[241566]: 2025-12-03 21:30:21.895 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:30:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:30:21 np0005544708 nova_compute[241566]: 2025-12-03 21:30:21.940 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 16:30:21 np0005544708 nova_compute[241566]: 2025-12-03 21:30:21.940 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:30:21 np0005544708 nova_compute[241566]: 2025-12-03 21:30:21.940 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:30:21 np0005544708 nova_compute[241566]: 2025-12-03 21:30:21.941 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:30:21 np0005544708 nova_compute[241566]: 2025-12-03 21:30:21.941 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:30:21 np0005544708 nova_compute[241566]: 2025-12-03 21:30:21.941 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:30:21 np0005544708 nova_compute[241566]: 2025-12-03 21:30:21.941 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:30:21 np0005544708 nova_compute[241566]: 2025-12-03 21:30:21.941 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 16:30:21 np0005544708 nova_compute[241566]: 2025-12-03 21:30:21.942 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:30:21 np0005544708 nova_compute[241566]: 2025-12-03 21:30:21.974 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:30:21 np0005544708 nova_compute[241566]: 2025-12-03 21:30:21.974 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:30:21 np0005544708 nova_compute[241566]: 2025-12-03 21:30:21.975 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:30:21 np0005544708 nova_compute[241566]: 2025-12-03 21:30:21.975 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 16:30:21 np0005544708 nova_compute[241566]: 2025-12-03 21:30:21.975 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:30:22 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v713: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:30:22 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2744860036' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:30:22 np0005544708 nova_compute[241566]: 2025-12-03 21:30:22.487 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:30:22 np0005544708 nova_compute[241566]: 2025-12-03 21:30:22.719 241570 WARNING nova.virt.libvirt.driver [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 16:30:22 np0005544708 nova_compute[241566]: 2025-12-03 21:30:22.721 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5299MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 16:30:22 np0005544708 nova_compute[241566]: 2025-12-03 21:30:22.721 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:30:22 np0005544708 nova_compute[241566]: 2025-12-03 21:30:22.722 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:30:22 np0005544708 nova_compute[241566]: 2025-12-03 21:30:22.794 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 16:30:22 np0005544708 nova_compute[241566]: 2025-12-03 21:30:22.795 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 16:30:22 np0005544708 nova_compute[241566]: 2025-12-03 21:30:22.820 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:30:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:30:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/116423800' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:30:23 np0005544708 nova_compute[241566]: 2025-12-03 21:30:23.380 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:30:23 np0005544708 nova_compute[241566]: 2025-12-03 21:30:23.386 241570 DEBUG nova.compute.provider_tree [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed in ProviderTree for provider: 94aba67c-5c5e-45d0-83d1-33eb467c8775 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 16:30:23 np0005544708 nova_compute[241566]: 2025-12-03 21:30:23.415 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 16:30:23 np0005544708 nova_compute[241566]: 2025-12-03 21:30:23.417 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 16:30:23 np0005544708 nova_compute[241566]: 2025-12-03 21:30:23.417 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:30:24 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v714: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:30:26 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v715: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:30:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:30:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:30:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:30:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:30:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:30:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:30:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:30:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:30:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:30:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:30:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:30:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7344613959254126e-06 of space, bias 4.0, pg target 0.002081353675110495 quantized to 16 (current 16)
Dec  3 16:30:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:30:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:30:28 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v716: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:30 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v717: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:30:32 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v718: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:32 np0005544708 podman[244382]: 2025-12-03 21:30:32.159540879 +0000 UTC m=+0.093125255 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  3 16:30:32 np0005544708 podman[244381]: 2025-12-03 21:30:32.167349689 +0000 UTC m=+0.107427509 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 16:30:34 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v719: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:30:36 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v720: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:38 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v721: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:40 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v722: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:30:42 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v723: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:44 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v724: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:30:46 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v725: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:48 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v726: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:30:48.934 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:30:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:30:48.935 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:30:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:30:48.935 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:30:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec  3 16:30:49 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec  3 16:30:49 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec  3 16:30:50 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v728: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:30:50 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec  3 16:30:50 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec  3 16:30:50 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec  3 16:30:50 np0005544708 podman[244432]: 2025-12-03 21:30:50.610313874 +0000 UTC m=+0.098892758 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  3 16:30:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:30:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec  3 16:30:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec  3 16:30:51 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec  3 16:30:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:30:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:30:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:30:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:30:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:30:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:30:52 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v731: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 682 B/s wr, 5 op/s
Dec  3 16:30:54 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v732: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 682 B/s wr, 5 op/s
Dec  3 16:30:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec  3 16:30:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec  3 16:30:55 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec  3 16:30:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:30:56 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v734: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 63 op/s
Dec  3 16:30:58 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v735: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.4 MiB/s wr, 50 op/s
Dec  3 16:30:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  3 16:30:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/553097494' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  3 16:30:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  3 16:30:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/553097494' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  3 16:31:00 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v736: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 4.8 MiB/s wr, 40 op/s
Dec  3 16:31:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:31:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec  3 16:31:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec  3 16:31:01 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec  3 16:31:02 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v738: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 5.1 MiB/s wr, 43 op/s
Dec  3 16:31:03 np0005544708 podman[244458]: 2025-12-03 21:31:03.139033423 +0000 UTC m=+0.077522827 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 16:31:03 np0005544708 podman[244459]: 2025-12-03 21:31:03.172355236 +0000 UTC m=+0.095750996 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:31:04 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v739: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.5 MiB/s wr, 37 op/s
Dec  3 16:31:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:31:06 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v740: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:31:08 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v741: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:31:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:31:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:31:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:31:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:31:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:31:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:31:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:31:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:31:09 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:31:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:31:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:31:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:31:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:31:09 np0005544708 podman[244643]: 2025-12-03 21:31:09.758852664 +0000 UTC m=+0.068566078 container create 9219fdbf87b88b185b85db227db888abd1e88108843eff7d01b9379c0971fc8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  3 16:31:09 np0005544708 systemd[1]: Started libpod-conmon-9219fdbf87b88b185b85db227db888abd1e88108843eff7d01b9379c0971fc8e.scope.
Dec  3 16:31:09 np0005544708 podman[244643]: 2025-12-03 21:31:09.7322148 +0000 UTC m=+0.041928294 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:31:09 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:31:09 np0005544708 podman[244643]: 2025-12-03 21:31:09.864304317 +0000 UTC m=+0.174017821 container init 9219fdbf87b88b185b85db227db888abd1e88108843eff7d01b9379c0971fc8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_jepsen, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:31:09 np0005544708 podman[244643]: 2025-12-03 21:31:09.880320516 +0000 UTC m=+0.190033930 container start 9219fdbf87b88b185b85db227db888abd1e88108843eff7d01b9379c0971fc8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_jepsen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec  3 16:31:09 np0005544708 podman[244643]: 2025-12-03 21:31:09.884862778 +0000 UTC m=+0.194576212 container attach 9219fdbf87b88b185b85db227db888abd1e88108843eff7d01b9379c0971fc8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Dec  3 16:31:09 np0005544708 modest_jepsen[244660]: 167 167
Dec  3 16:31:09 np0005544708 systemd[1]: libpod-9219fdbf87b88b185b85db227db888abd1e88108843eff7d01b9379c0971fc8e.scope: Deactivated successfully.
Dec  3 16:31:09 np0005544708 conmon[244660]: conmon 9219fdbf87b88b185b85 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9219fdbf87b88b185b85db227db888abd1e88108843eff7d01b9379c0971fc8e.scope/container/memory.events
Dec  3 16:31:09 np0005544708 podman[244643]: 2025-12-03 21:31:09.891068154 +0000 UTC m=+0.200781568 container died 9219fdbf87b88b185b85db227db888abd1e88108843eff7d01b9379c0971fc8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_jepsen, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:31:09 np0005544708 systemd[1]: var-lib-containers-storage-overlay-0e01889006dddf9c16c350ebf1ae83b9ea7274ed9c063266865401d5919dd8d9-merged.mount: Deactivated successfully.
Dec  3 16:31:09 np0005544708 podman[244643]: 2025-12-03 21:31:09.953995868 +0000 UTC m=+0.263709312 container remove 9219fdbf87b88b185b85db227db888abd1e88108843eff7d01b9379c0971fc8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:31:09 np0005544708 systemd[1]: libpod-conmon-9219fdbf87b88b185b85db227db888abd1e88108843eff7d01b9379c0971fc8e.scope: Deactivated successfully.
Dec  3 16:31:10 np0005544708 podman[244684]: 2025-12-03 21:31:10.141070838 +0000 UTC m=+0.037822643 container create 8dd3c8bed57dec60541e3c9d8a45344246835a48c6aba45ad0f428da98b84d80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shirley, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:31:10 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v742: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:31:10 np0005544708 systemd[1]: Started libpod-conmon-8dd3c8bed57dec60541e3c9d8a45344246835a48c6aba45ad0f428da98b84d80.scope.
Dec  3 16:31:10 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:31:10 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec0ae966a4bd0db1743b76bc61541bbf8eaada5833bd50e4471e522b6924611/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:31:10 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec0ae966a4bd0db1743b76bc61541bbf8eaada5833bd50e4471e522b6924611/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:31:10 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec0ae966a4bd0db1743b76bc61541bbf8eaada5833bd50e4471e522b6924611/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:31:10 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec0ae966a4bd0db1743b76bc61541bbf8eaada5833bd50e4471e522b6924611/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:31:10 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec0ae966a4bd0db1743b76bc61541bbf8eaada5833bd50e4471e522b6924611/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:31:10 np0005544708 podman[244684]: 2025-12-03 21:31:10.215163983 +0000 UTC m=+0.111915788 container init 8dd3c8bed57dec60541e3c9d8a45344246835a48c6aba45ad0f428da98b84d80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shirley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:31:10 np0005544708 podman[244684]: 2025-12-03 21:31:10.124402432 +0000 UTC m=+0.021154237 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:31:10 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:31:10 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:31:10 np0005544708 podman[244684]: 2025-12-03 21:31:10.226690862 +0000 UTC m=+0.123442657 container start 8dd3c8bed57dec60541e3c9d8a45344246835a48c6aba45ad0f428da98b84d80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shirley, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:31:10 np0005544708 podman[244684]: 2025-12-03 21:31:10.230523134 +0000 UTC m=+0.127274919 container attach 8dd3c8bed57dec60541e3c9d8a45344246835a48c6aba45ad0f428da98b84d80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shirley, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:31:10 np0005544708 zen_shirley[244700]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:31:10 np0005544708 zen_shirley[244700]: --> All data devices are unavailable
Dec  3 16:31:10 np0005544708 systemd[1]: libpod-8dd3c8bed57dec60541e3c9d8a45344246835a48c6aba45ad0f428da98b84d80.scope: Deactivated successfully.
Dec  3 16:31:10 np0005544708 podman[244684]: 2025-12-03 21:31:10.778389965 +0000 UTC m=+0.675141760 container died 8dd3c8bed57dec60541e3c9d8a45344246835a48c6aba45ad0f428da98b84d80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shirley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 16:31:10 np0005544708 systemd[1]: var-lib-containers-storage-overlay-7ec0ae966a4bd0db1743b76bc61541bbf8eaada5833bd50e4471e522b6924611-merged.mount: Deactivated successfully.
Dec  3 16:31:10 np0005544708 podman[244684]: 2025-12-03 21:31:10.84876761 +0000 UTC m=+0.745519445 container remove 8dd3c8bed57dec60541e3c9d8a45344246835a48c6aba45ad0f428da98b84d80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shirley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:31:10 np0005544708 systemd[1]: libpod-conmon-8dd3c8bed57dec60541e3c9d8a45344246835a48c6aba45ad0f428da98b84d80.scope: Deactivated successfully.
Dec  3 16:31:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:31:11 np0005544708 podman[244794]: 2025-12-03 21:31:11.361138211 +0000 UTC m=+0.055000584 container create e16dbd4904e958ee9eba5c69dc9be12043440e6ca67fbe817ea8c9bb8dce296e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_colden, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 16:31:11 np0005544708 systemd[1]: Started libpod-conmon-e16dbd4904e958ee9eba5c69dc9be12043440e6ca67fbe817ea8c9bb8dce296e.scope.
Dec  3 16:31:11 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:31:11 np0005544708 podman[244794]: 2025-12-03 21:31:11.339521422 +0000 UTC m=+0.033383805 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:31:11 np0005544708 podman[244794]: 2025-12-03 21:31:11.44887984 +0000 UTC m=+0.142742293 container init e16dbd4904e958ee9eba5c69dc9be12043440e6ca67fbe817ea8c9bb8dce296e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_colden, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:31:11 np0005544708 podman[244794]: 2025-12-03 21:31:11.462032702 +0000 UTC m=+0.155895105 container start e16dbd4904e958ee9eba5c69dc9be12043440e6ca67fbe817ea8c9bb8dce296e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  3 16:31:11 np0005544708 podman[244794]: 2025-12-03 21:31:11.466513692 +0000 UTC m=+0.160376095 container attach e16dbd4904e958ee9eba5c69dc9be12043440e6ca67fbe817ea8c9bb8dce296e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_colden, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:31:11 np0005544708 goofy_colden[244810]: 167 167
Dec  3 16:31:11 np0005544708 systemd[1]: libpod-e16dbd4904e958ee9eba5c69dc9be12043440e6ca67fbe817ea8c9bb8dce296e.scope: Deactivated successfully.
Dec  3 16:31:11 np0005544708 podman[244794]: 2025-12-03 21:31:11.468978478 +0000 UTC m=+0.162840881 container died e16dbd4904e958ee9eba5c69dc9be12043440e6ca67fbe817ea8c9bb8dce296e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_colden, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:31:11 np0005544708 systemd[1]: var-lib-containers-storage-overlay-20b3608875c643d0cffc7ed6a7395efad972ecd482128762451ffad39de5efc3-merged.mount: Deactivated successfully.
Dec  3 16:31:11 np0005544708 podman[244794]: 2025-12-03 21:31:11.522783459 +0000 UTC m=+0.216645832 container remove e16dbd4904e958ee9eba5c69dc9be12043440e6ca67fbe817ea8c9bb8dce296e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:31:11 np0005544708 systemd[1]: libpod-conmon-e16dbd4904e958ee9eba5c69dc9be12043440e6ca67fbe817ea8c9bb8dce296e.scope: Deactivated successfully.
Dec  3 16:31:11 np0005544708 podman[244834]: 2025-12-03 21:31:11.76443484 +0000 UTC m=+0.064728434 container create dbbb187861340b4569459a547582f2076869cb837e0f38f19ff2b61146cf83e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 16:31:11 np0005544708 systemd[1]: Started libpod-conmon-dbbb187861340b4569459a547582f2076869cb837e0f38f19ff2b61146cf83e4.scope.
Dec  3 16:31:11 np0005544708 podman[244834]: 2025-12-03 21:31:11.741707072 +0000 UTC m=+0.042000676 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:31:11 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:31:11 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4bf0a6b47274cc1fc7a96b4970f99aa5ebae36e27797274a92dedc5d66128de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:31:11 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4bf0a6b47274cc1fc7a96b4970f99aa5ebae36e27797274a92dedc5d66128de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:31:11 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4bf0a6b47274cc1fc7a96b4970f99aa5ebae36e27797274a92dedc5d66128de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:31:11 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4bf0a6b47274cc1fc7a96b4970f99aa5ebae36e27797274a92dedc5d66128de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:31:11 np0005544708 podman[244834]: 2025-12-03 21:31:11.89777065 +0000 UTC m=+0.198064294 container init dbbb187861340b4569459a547582f2076869cb837e0f38f19ff2b61146cf83e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:31:11 np0005544708 podman[244834]: 2025-12-03 21:31:11.909537925 +0000 UTC m=+0.209831519 container start dbbb187861340b4569459a547582f2076869cb837e0f38f19ff2b61146cf83e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec  3 16:31:11 np0005544708 podman[244834]: 2025-12-03 21:31:11.914012885 +0000 UTC m=+0.214306479 container attach dbbb187861340b4569459a547582f2076869cb837e0f38f19ff2b61146cf83e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:31:11 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:31:11.954 151937 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:b3:fa', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:85:3a:67:f5:74'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 16:31:11 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:31:11.960 151937 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 16:31:11 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:31:11.962 151937 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f27c01e7-5b62-4209-a664-3ae50b74644d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 16:31:12 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v743: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]: {
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:    "0": [
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:        {
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "devices": [
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "/dev/loop3"
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            ],
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "lv_name": "ceph_lv0",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "lv_size": "21470642176",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "name": "ceph_lv0",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "tags": {
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.cluster_name": "ceph",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.crush_device_class": "",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.encrypted": "0",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.objectstore": "bluestore",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.osd_id": "0",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.type": "block",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.vdo": "0",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.with_tpm": "0"
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            },
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "type": "block",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "vg_name": "ceph_vg0"
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:        }
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:    ],
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:    "1": [
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:        {
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "devices": [
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "/dev/loop4"
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            ],
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "lv_name": "ceph_lv1",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "lv_size": "21470642176",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "name": "ceph_lv1",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "tags": {
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.cluster_name": "ceph",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.crush_device_class": "",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.encrypted": "0",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.objectstore": "bluestore",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.osd_id": "1",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.type": "block",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.vdo": "0",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.with_tpm": "0"
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            },
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "type": "block",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "vg_name": "ceph_vg1"
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:        }
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:    ],
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:    "2": [
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:        {
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "devices": [
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "/dev/loop5"
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            ],
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "lv_name": "ceph_lv2",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "lv_size": "21470642176",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "name": "ceph_lv2",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "tags": {
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.cluster_name": "ceph",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.crush_device_class": "",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.encrypted": "0",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.objectstore": "bluestore",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.osd_id": "2",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.type": "block",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.vdo": "0",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:                "ceph.with_tpm": "0"
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            },
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "type": "block",
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:            "vg_name": "ceph_vg2"
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:        }
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]:    ]
Dec  3 16:31:12 np0005544708 silly_mestorf[244850]: }
Dec  3 16:31:12 np0005544708 systemd[1]: libpod-dbbb187861340b4569459a547582f2076869cb837e0f38f19ff2b61146cf83e4.scope: Deactivated successfully.
Dec  3 16:31:12 np0005544708 podman[244834]: 2025-12-03 21:31:12.307000109 +0000 UTC m=+0.607293693 container died dbbb187861340b4569459a547582f2076869cb837e0f38f19ff2b61146cf83e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mestorf, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 16:31:12 np0005544708 systemd[1]: var-lib-containers-storage-overlay-f4bf0a6b47274cc1fc7a96b4970f99aa5ebae36e27797274a92dedc5d66128de-merged.mount: Deactivated successfully.
Dec  3 16:31:12 np0005544708 podman[244834]: 2025-12-03 21:31:12.358293763 +0000 UTC m=+0.658587317 container remove dbbb187861340b4569459a547582f2076869cb837e0f38f19ff2b61146cf83e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:31:12 np0005544708 systemd[1]: libpod-conmon-dbbb187861340b4569459a547582f2076869cb837e0f38f19ff2b61146cf83e4.scope: Deactivated successfully.
Dec  3 16:31:12 np0005544708 podman[244932]: 2025-12-03 21:31:12.878830211 +0000 UTC m=+0.045414297 container create 2716c2d70490038bcb37ec0050be3e695949ab26cb5576cddbdbd4f6606af3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:31:12 np0005544708 systemd[1]: Started libpod-conmon-2716c2d70490038bcb37ec0050be3e695949ab26cb5576cddbdbd4f6606af3fa.scope.
Dec  3 16:31:12 np0005544708 podman[244932]: 2025-12-03 21:31:12.858190929 +0000 UTC m=+0.024775005 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:31:12 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:31:13 np0005544708 podman[244932]: 2025-12-03 21:31:13.002376401 +0000 UTC m=+0.168960527 container init 2716c2d70490038bcb37ec0050be3e695949ab26cb5576cddbdbd4f6606af3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bell, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 16:31:13 np0005544708 podman[244932]: 2025-12-03 21:31:13.012021858 +0000 UTC m=+0.178605944 container start 2716c2d70490038bcb37ec0050be3e695949ab26cb5576cddbdbd4f6606af3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:31:13 np0005544708 podman[244932]: 2025-12-03 21:31:13.015870871 +0000 UTC m=+0.182454967 container attach 2716c2d70490038bcb37ec0050be3e695949ab26cb5576cddbdbd4f6606af3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 16:31:13 np0005544708 peaceful_bell[244948]: 167 167
Dec  3 16:31:13 np0005544708 systemd[1]: libpod-2716c2d70490038bcb37ec0050be3e695949ab26cb5576cddbdbd4f6606af3fa.scope: Deactivated successfully.
Dec  3 16:31:13 np0005544708 podman[244932]: 2025-12-03 21:31:13.022758656 +0000 UTC m=+0.189342752 container died 2716c2d70490038bcb37ec0050be3e695949ab26cb5576cddbdbd4f6606af3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 16:31:13 np0005544708 systemd[1]: var-lib-containers-storage-overlay-042257b7c2536843195381ea1b5fac8d97d0cc72aaaff3303ace091a2681f908-merged.mount: Deactivated successfully.
Dec  3 16:31:13 np0005544708 podman[244932]: 2025-12-03 21:31:13.073594147 +0000 UTC m=+0.240178213 container remove 2716c2d70490038bcb37ec0050be3e695949ab26cb5576cddbdbd4f6606af3fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec  3 16:31:13 np0005544708 systemd[1]: libpod-conmon-2716c2d70490038bcb37ec0050be3e695949ab26cb5576cddbdbd4f6606af3fa.scope: Deactivated successfully.
Dec  3 16:31:13 np0005544708 podman[244973]: 2025-12-03 21:31:13.280049076 +0000 UTC m=+0.065342411 container create d4bd0c6b5396cc1debbad154446dcc876740b7fe29a1901b61329b5c95432dfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:31:13 np0005544708 systemd[1]: Started libpod-conmon-d4bd0c6b5396cc1debbad154446dcc876740b7fe29a1901b61329b5c95432dfc.scope.
Dec  3 16:31:13 np0005544708 podman[244973]: 2025-12-03 21:31:13.252695603 +0000 UTC m=+0.037988978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:31:13 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:31:13 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5022fc51579297474b28464f3d1f314812da28be5a57aacc61b91bacf30d180e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:31:13 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5022fc51579297474b28464f3d1f314812da28be5a57aacc61b91bacf30d180e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:31:13 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5022fc51579297474b28464f3d1f314812da28be5a57aacc61b91bacf30d180e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:31:13 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5022fc51579297474b28464f3d1f314812da28be5a57aacc61b91bacf30d180e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:31:13 np0005544708 podman[244973]: 2025-12-03 21:31:13.376676053 +0000 UTC m=+0.161969428 container init d4bd0c6b5396cc1debbad154446dcc876740b7fe29a1901b61329b5c95432dfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_merkle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:31:13 np0005544708 podman[244973]: 2025-12-03 21:31:13.392702743 +0000 UTC m=+0.177996068 container start d4bd0c6b5396cc1debbad154446dcc876740b7fe29a1901b61329b5c95432dfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_merkle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:31:13 np0005544708 podman[244973]: 2025-12-03 21:31:13.397018138 +0000 UTC m=+0.182311463 container attach d4bd0c6b5396cc1debbad154446dcc876740b7fe29a1901b61329b5c95432dfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_merkle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:31:14 np0005544708 lvm[245067]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:31:14 np0005544708 lvm[245067]: VG ceph_vg0 finished
Dec  3 16:31:14 np0005544708 lvm[245068]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:31:14 np0005544708 lvm[245068]: VG ceph_vg1 finished
Dec  3 16:31:14 np0005544708 lvm[245070]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:31:14 np0005544708 lvm[245070]: VG ceph_vg2 finished
Dec  3 16:31:14 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v744: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:31:14 np0005544708 compassionate_merkle[244989]: {}
Dec  3 16:31:14 np0005544708 systemd[1]: libpod-d4bd0c6b5396cc1debbad154446dcc876740b7fe29a1901b61329b5c95432dfc.scope: Deactivated successfully.
Dec  3 16:31:14 np0005544708 podman[244973]: 2025-12-03 21:31:14.230109447 +0000 UTC m=+1.015402782 container died d4bd0c6b5396cc1debbad154446dcc876740b7fe29a1901b61329b5c95432dfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:31:14 np0005544708 systemd[1]: libpod-d4bd0c6b5396cc1debbad154446dcc876740b7fe29a1901b61329b5c95432dfc.scope: Consumed 1.401s CPU time.
Dec  3 16:31:14 np0005544708 systemd[1]: var-lib-containers-storage-overlay-5022fc51579297474b28464f3d1f314812da28be5a57aacc61b91bacf30d180e-merged.mount: Deactivated successfully.
Dec  3 16:31:14 np0005544708 podman[244973]: 2025-12-03 21:31:14.339004894 +0000 UTC m=+1.124298189 container remove d4bd0c6b5396cc1debbad154446dcc876740b7fe29a1901b61329b5c95432dfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec  3 16:31:14 np0005544708 systemd[1]: libpod-conmon-d4bd0c6b5396cc1debbad154446dcc876740b7fe29a1901b61329b5c95432dfc.scope: Deactivated successfully.
Dec  3 16:31:14 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:31:14 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:31:14 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:31:14 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:31:15 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:31:15 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:31:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:31:16 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v745: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:31:18 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v746: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:31:20 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v747: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.068 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.069 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.100 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.100 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.101 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 16:31:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.170 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.170 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.171 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.171 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.172 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.172 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.172 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 16:31:21 np0005544708 podman[245112]: 2025-12-03 21:31:21.21119157 +0000 UTC m=+0.138037007 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:31:21
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.mgr', 'vms', 'volumes', 'cephfs.cephfs.meta', 'backups']
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.587 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.588 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.588 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.589 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 16:31:21 np0005544708 nova_compute[241566]: 2025-12-03 21:31:21.589 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:31:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:31:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:31:22 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2042915418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:31:22 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v748: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:31:22 np0005544708 nova_compute[241566]: 2025-12-03 21:31:22.161 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:31:22 np0005544708 nova_compute[241566]: 2025-12-03 21:31:22.371 241570 WARNING nova.virt.libvirt.driver [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 16:31:22 np0005544708 nova_compute[241566]: 2025-12-03 21:31:22.373 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5289MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 16:31:22 np0005544708 nova_compute[241566]: 2025-12-03 21:31:22.373 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:31:22 np0005544708 nova_compute[241566]: 2025-12-03 21:31:22.374 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:31:22 np0005544708 nova_compute[241566]: 2025-12-03 21:31:22.466 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 16:31:22 np0005544708 nova_compute[241566]: 2025-12-03 21:31:22.466 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 16:31:22 np0005544708 nova_compute[241566]: 2025-12-03 21:31:22.483 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:31:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:31:22 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1619567213' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:31:23 np0005544708 nova_compute[241566]: 2025-12-03 21:31:22.999 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:31:23 np0005544708 nova_compute[241566]: 2025-12-03 21:31:23.005 241570 DEBUG nova.compute.provider_tree [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed in ProviderTree for provider: 94aba67c-5c5e-45d0-83d1-33eb467c8775 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 16:31:23 np0005544708 nova_compute[241566]: 2025-12-03 21:31:23.067 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 16:31:23 np0005544708 nova_compute[241566]: 2025-12-03 21:31:23.069 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 16:31:23 np0005544708 nova_compute[241566]: 2025-12-03 21:31:23.070 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:31:24 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v749: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:31:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:31:26 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v750: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:31:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:31:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:31:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:31:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:31:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:31:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:31:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:31:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:31:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:31:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:31:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126954702844 of space, bias 1.0, pg target 0.19983380864108533 quantized to 32 (current 32)
Dec  3 16:31:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:31:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8006447912962508e-06 of space, bias 4.0, pg target 0.002160773749555501 quantized to 16 (current 16)
Dec  3 16:31:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:31:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:31:27.815821) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797487815910, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1377, "num_deletes": 251, "total_data_size": 1430002, "memory_usage": 1455984, "flush_reason": "Manual Compaction"}
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797487828912, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1399605, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14336, "largest_seqno": 15712, "table_properties": {"data_size": 1393106, "index_size": 3702, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13562, "raw_average_key_size": 19, "raw_value_size": 1379946, "raw_average_value_size": 2014, "num_data_blocks": 169, "num_entries": 685, "num_filter_entries": 685, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764797351, "oldest_key_time": 1764797351, "file_creation_time": 1764797487, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 13136 microseconds, and 7549 cpu microseconds.
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:31:27.828967) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1399605 bytes OK
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:31:27.828990) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:31:27.830187) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:31:27.830207) EVENT_LOG_v1 {"time_micros": 1764797487830202, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:31:27.830229) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1423868, prev total WAL file size 1423868, number of live WAL files 2.
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:31:27.831061) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1366KB)], [35(5495KB)]
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797487831102, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 7026676, "oldest_snapshot_seqno": -1}
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3530 keys, 5817987 bytes, temperature: kUnknown
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797487875121, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 5817987, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5791426, "index_size": 16659, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8837, "raw_key_size": 83481, "raw_average_key_size": 23, "raw_value_size": 5724939, "raw_average_value_size": 1621, "num_data_blocks": 718, "num_entries": 3530, "num_filter_entries": 3530, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796079, "oldest_key_time": 0, "file_creation_time": 1764797487, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:31:27.875398) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 5817987 bytes
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:31:27.876970) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.3 rd, 131.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 5.4 +0.0 blob) out(5.5 +0.0 blob), read-write-amplify(9.2) write-amplify(4.2) OK, records in: 4048, records dropped: 518 output_compression: NoCompression
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:31:27.876998) EVENT_LOG_v1 {"time_micros": 1764797487876984, "job": 16, "event": "compaction_finished", "compaction_time_micros": 44107, "compaction_time_cpu_micros": 16663, "output_level": 6, "num_output_files": 1, "total_output_size": 5817987, "num_input_records": 4048, "num_output_records": 3530, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797487877534, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797487879215, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:31:27.830907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:31:27.879321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:31:27.879328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:31:27.879331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:31:27.879334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:31:27 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:31:27.879337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:31:28 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v751: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:31:30 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v752: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:31:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:31:32 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v753: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:31:32 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec  3 16:31:32 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec  3 16:31:32 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec  3 16:31:34 np0005544708 podman[245183]: 2025-12-03 21:31:34.120591275 +0000 UTC m=+0.055303993 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  3 16:31:34 np0005544708 podman[245184]: 2025-12-03 21:31:34.126332768 +0000 UTC m=+0.056846923 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  3 16:31:34 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v755: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:31:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:31:36 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v756: 177 pgs: 177 active+clean; 49 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 821 KiB/s wr, 21 op/s
Dec  3 16:31:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec  3 16:31:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec  3 16:31:37 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec  3 16:31:38 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v758: 177 pgs: 177 active+clean; 49 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.0 MiB/s wr, 27 op/s
Dec  3 16:31:38 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec  3 16:31:38 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec  3 16:31:38 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec  3 16:31:39 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec  3 16:31:39 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec  3 16:31:39 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec  3 16:31:40 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v761: 177 pgs: 177 active+clean; 49 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.3 MiB/s wr, 36 op/s
Dec  3 16:31:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:31:42 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v762: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 17 MiB/s wr, 154 op/s
Dec  3 16:31:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec  3 16:31:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec  3 16:31:42 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec  3 16:31:44 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v764: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 17 MiB/s wr, 154 op/s
Dec  3 16:31:44 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec  3 16:31:44 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec  3 16:31:44 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec  3 16:31:45 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec  3 16:31:45 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec  3 16:31:45 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec  3 16:31:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:31:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec  3 16:31:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec  3 16:31:46 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec  3 16:31:46 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v768: 177 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 167 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 12 KiB/s wr, 188 op/s
Dec  3 16:31:48 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v769: 177 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 167 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 9.4 KiB/s wr, 144 op/s
Dec  3 16:31:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:31:48.935 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:31:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:31:48.936 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:31:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:31:48.936 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:31:50 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v770: 177 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 167 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 8.2 KiB/s wr, 125 op/s
Dec  3 16:31:50 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec  3 16:31:50 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec  3 16:31:50 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec  3 16:31:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:31:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:31:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:31:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:31:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:31:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:31:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:31:52 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v772: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 10 KiB/s wr, 159 op/s
Dec  3 16:31:52 np0005544708 podman[245219]: 2025-12-03 21:31:52.201376053 +0000 UTC m=+0.131203695 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  3 16:31:52 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec  3 16:31:52 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec  3 16:31:52 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec  3 16:31:53 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec  3 16:31:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec  3 16:31:54 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec  3 16:31:54 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v775: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 6.5 KiB/s wr, 92 op/s
Dec  3 16:31:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec  3 16:31:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec  3 16:31:55 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec  3 16:31:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec  3 16:31:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec  3 16:31:56 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec  3 16:31:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:31:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec  3 16:31:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec  3 16:31:56 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec  3 16:31:56 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v779: 177 pgs: 177 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 19 KiB/s wr, 157 op/s
Dec  3 16:31:57 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec  3 16:31:57 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec  3 16:31:57 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec  3 16:31:58 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v781: 177 pgs: 177 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 15 KiB/s wr, 124 op/s
Dec  3 16:31:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  3 16:31:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1121350231' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  3 16:31:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  3 16:31:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1121350231' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  3 16:32:00 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v782: 177 pgs: 177 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 11 KiB/s wr, 96 op/s
Dec  3 16:32:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec  3 16:32:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec  3 16:32:01 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec  3 16:32:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:32:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec  3 16:32:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec  3 16:32:02 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec  3 16:32:02 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v785: 177 pgs: 177 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 5.7 KiB/s wr, 51 op/s
Dec  3 16:32:03 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec  3 16:32:03 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec  3 16:32:03 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec  3 16:32:04 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec  3 16:32:04 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec  3 16:32:04 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec  3 16:32:04 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v788: 177 pgs: 177 active+clean; 41 MiB data, 140 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 8.5 KiB/s wr, 77 op/s
Dec  3 16:32:05 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec  3 16:32:05 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec  3 16:32:05 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec  3 16:32:05 np0005544708 podman[245246]: 2025-12-03 21:32:05.192005222 +0000 UTC m=+0.113381397 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  3 16:32:05 np0005544708 podman[245245]: 2025-12-03 21:32:05.214930347 +0000 UTC m=+0.141316656 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  3 16:32:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec  3 16:32:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec  3 16:32:06 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec  3 16:32:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:32:06 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v791: 177 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 172 active+clean; 41 MiB data, 141 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 16 KiB/s wr, 302 op/s
Dec  3 16:32:08 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v792: 177 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 172 active+clean; 41 MiB data, 141 MiB used, 60 GiB / 60 GiB avail; 174 KiB/s rd, 13 KiB/s wr, 238 op/s
Dec  3 16:32:10 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v793: 177 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 172 active+clean; 41 MiB data, 141 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 11 KiB/s wr, 199 op/s
Dec  3 16:32:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:32:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec  3 16:32:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec  3 16:32:11 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec  3 16:32:12 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:32:12.175 151937 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:b3:fa', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:85:3a:67:f5:74'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 16:32:12 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v795: 177 pgs: 177 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 12 KiB/s wr, 210 op/s
Dec  3 16:32:12 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:32:12.177 151937 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 16:32:12 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:32:12.179 151937 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f27c01e7-5b62-4209-a664-3ae50b74644d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 16:32:14 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v796: 177 pgs: 177 active+clean; 41 MiB data, 158 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.2 KiB/s wr, 34 op/s
Dec  3 16:32:15 np0005544708 podman[245380]: 2025-12-03 21:32:15.266916855 +0000 UTC m=+0.089229790 container exec 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:32:15 np0005544708 podman[245380]: 2025-12-03 21:32:15.380114326 +0000 UTC m=+0.202427221 container exec_died 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:32:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:32:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec  3 16:32:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec  3 16:32:16 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec  3 16:32:16 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v798: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 3.6 KiB/s wr, 46 op/s
Dec  3 16:32:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:32:16 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:32:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:32:16 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:32:16 np0005544708 nova_compute[241566]: 2025-12-03 21:32:16.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:32:16 np0005544708 nova_compute[241566]: 2025-12-03 21:32:16.553 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  3 16:32:16 np0005544708 nova_compute[241566]: 2025-12-03 21:32:16.567 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  3 16:32:16 np0005544708 nova_compute[241566]: 2025-12-03 21:32:16.568 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:32:16 np0005544708 nova_compute[241566]: 2025-12-03 21:32:16.569 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  3 16:32:16 np0005544708 nova_compute[241566]: 2025-12-03 21:32:16.580 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:32:17 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:32:17 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:32:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:32:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:32:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:32:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:32:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:32:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:32:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:32:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:32:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:32:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:32:17 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:32:17 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:32:17 np0005544708 podman[245691]: 2025-12-03 21:32:17.785376666 +0000 UTC m=+0.065078273 container create 6795932662b1b6635465de63b60b8271ebdeb997a3e309f4aabda20089bdb8f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:32:17 np0005544708 systemd[1]: Started libpod-conmon-6795932662b1b6635465de63b60b8271ebdeb997a3e309f4aabda20089bdb8f4.scope.
Dec  3 16:32:17 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:32:17 np0005544708 podman[245691]: 2025-12-03 21:32:17.759001379 +0000 UTC m=+0.038703046 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:32:17 np0005544708 podman[245691]: 2025-12-03 21:32:17.871144273 +0000 UTC m=+0.150845950 container init 6795932662b1b6635465de63b60b8271ebdeb997a3e309f4aabda20089bdb8f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_lalande, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  3 16:32:17 np0005544708 podman[245691]: 2025-12-03 21:32:17.879248479 +0000 UTC m=+0.158950097 container start 6795932662b1b6635465de63b60b8271ebdeb997a3e309f4aabda20089bdb8f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:32:17 np0005544708 podman[245691]: 2025-12-03 21:32:17.883008381 +0000 UTC m=+0.162709998 container attach 6795932662b1b6635465de63b60b8271ebdeb997a3e309f4aabda20089bdb8f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec  3 16:32:17 np0005544708 ecstatic_lalande[245708]: 167 167
Dec  3 16:32:17 np0005544708 systemd[1]: libpod-6795932662b1b6635465de63b60b8271ebdeb997a3e309f4aabda20089bdb8f4.scope: Deactivated successfully.
Dec  3 16:32:17 np0005544708 conmon[245708]: conmon 6795932662b1b6635465 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6795932662b1b6635465de63b60b8271ebdeb997a3e309f4aabda20089bdb8f4.scope/container/memory.events
Dec  3 16:32:17 np0005544708 podman[245691]: 2025-12-03 21:32:17.887044778 +0000 UTC m=+0.166746395 container died 6795932662b1b6635465de63b60b8271ebdeb997a3e309f4aabda20089bdb8f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_lalande, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:32:17 np0005544708 systemd[1]: var-lib-containers-storage-overlay-4b62ab10f2b7cfee39bdb7e841b273aa26926990870153e2ce1ca765ee7a3cdf-merged.mount: Deactivated successfully.
Dec  3 16:32:17 np0005544708 podman[245691]: 2025-12-03 21:32:17.941204119 +0000 UTC m=+0.220905696 container remove 6795932662b1b6635465de63b60b8271ebdeb997a3e309f4aabda20089bdb8f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_lalande, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:32:17 np0005544708 systemd[1]: libpod-conmon-6795932662b1b6635465de63b60b8271ebdeb997a3e309f4aabda20089bdb8f4.scope: Deactivated successfully.
Dec  3 16:32:18 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:32:18 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:32:18 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:32:18 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v799: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 3.6 KiB/s wr, 46 op/s
Dec  3 16:32:18 np0005544708 podman[245732]: 2025-12-03 21:32:18.189768725 +0000 UTC m=+0.056696689 container create 91d0740a84b35eec3d8e9d46f1de178dfb82b0dbdc35ce46961120f6af5c39b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bassi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 16:32:18 np0005544708 systemd[1]: Started libpod-conmon-91d0740a84b35eec3d8e9d46f1de178dfb82b0dbdc35ce46961120f6af5c39b4.scope.
Dec  3 16:32:18 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:32:18 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad5855244fc81a1f9e5d9daae6f4a911ee79b229f376b15e559c4d841220c80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:32:18 np0005544708 podman[245732]: 2025-12-03 21:32:18.172352469 +0000 UTC m=+0.039280433 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:32:18 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad5855244fc81a1f9e5d9daae6f4a911ee79b229f376b15e559c4d841220c80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:32:18 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad5855244fc81a1f9e5d9daae6f4a911ee79b229f376b15e559c4d841220c80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:32:18 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad5855244fc81a1f9e5d9daae6f4a911ee79b229f376b15e559c4d841220c80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:32:18 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad5855244fc81a1f9e5d9daae6f4a911ee79b229f376b15e559c4d841220c80/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:32:18 np0005544708 podman[245732]: 2025-12-03 21:32:18.287464671 +0000 UTC m=+0.154392655 container init 91d0740a84b35eec3d8e9d46f1de178dfb82b0dbdc35ce46961120f6af5c39b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:32:18 np0005544708 podman[245732]: 2025-12-03 21:32:18.298444085 +0000 UTC m=+0.165372029 container start 91d0740a84b35eec3d8e9d46f1de178dfb82b0dbdc35ce46961120f6af5c39b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bassi, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:32:18 np0005544708 podman[245732]: 2025-12-03 21:32:18.302167165 +0000 UTC m=+0.169095159 container attach 91d0740a84b35eec3d8e9d46f1de178dfb82b0dbdc35ce46961120f6af5c39b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bassi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:32:18 np0005544708 nova_compute[241566]: 2025-12-03 21:32:18.582 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:32:18 np0005544708 cool_bassi[245748]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:32:18 np0005544708 cool_bassi[245748]: --> All data devices are unavailable
Dec  3 16:32:18 np0005544708 systemd[1]: libpod-91d0740a84b35eec3d8e9d46f1de178dfb82b0dbdc35ce46961120f6af5c39b4.scope: Deactivated successfully.
Dec  3 16:32:18 np0005544708 podman[245768]: 2025-12-03 21:32:18.921438658 +0000 UTC m=+0.030020434 container died 91d0740a84b35eec3d8e9d46f1de178dfb82b0dbdc35ce46961120f6af5c39b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bassi, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 16:32:18 np0005544708 systemd[1]: var-lib-containers-storage-overlay-8ad5855244fc81a1f9e5d9daae6f4a911ee79b229f376b15e559c4d841220c80-merged.mount: Deactivated successfully.
Dec  3 16:32:18 np0005544708 podman[245768]: 2025-12-03 21:32:18.967496772 +0000 UTC m=+0.076078448 container remove 91d0740a84b35eec3d8e9d46f1de178dfb82b0dbdc35ce46961120f6af5c39b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Dec  3 16:32:18 np0005544708 systemd[1]: libpod-conmon-91d0740a84b35eec3d8e9d46f1de178dfb82b0dbdc35ce46961120f6af5c39b4.scope: Deactivated successfully.
Dec  3 16:32:19 np0005544708 podman[245846]: 2025-12-03 21:32:19.498113081 +0000 UTC m=+0.050175735 container create d2fed066020d5eaa6beb940ee4e6aac1b57f16e0b68d80f37f918e1c4626860c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 16:32:19 np0005544708 systemd[1]: Started libpod-conmon-d2fed066020d5eaa6beb940ee4e6aac1b57f16e0b68d80f37f918e1c4626860c.scope.
Dec  3 16:32:19 np0005544708 nova_compute[241566]: 2025-12-03 21:32:19.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:32:19 np0005544708 nova_compute[241566]: 2025-12-03 21:32:19.552 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:32:19 np0005544708 nova_compute[241566]: 2025-12-03 21:32:19.552 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 16:32:19 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:32:19 np0005544708 podman[245846]: 2025-12-03 21:32:19.476269536 +0000 UTC m=+0.028332190 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:32:19 np0005544708 podman[245846]: 2025-12-03 21:32:19.583137108 +0000 UTC m=+0.135199762 container init d2fed066020d5eaa6beb940ee4e6aac1b57f16e0b68d80f37f918e1c4626860c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_kapitsa, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:32:19 np0005544708 podman[245846]: 2025-12-03 21:32:19.594830221 +0000 UTC m=+0.146892845 container start d2fed066020d5eaa6beb940ee4e6aac1b57f16e0b68d80f37f918e1c4626860c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_kapitsa, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 16:32:19 np0005544708 podman[245846]: 2025-12-03 21:32:19.5981539 +0000 UTC m=+0.150216554 container attach d2fed066020d5eaa6beb940ee4e6aac1b57f16e0b68d80f37f918e1c4626860c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_kapitsa, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  3 16:32:19 np0005544708 infallible_kapitsa[245862]: 167 167
Dec  3 16:32:19 np0005544708 systemd[1]: libpod-d2fed066020d5eaa6beb940ee4e6aac1b57f16e0b68d80f37f918e1c4626860c.scope: Deactivated successfully.
Dec  3 16:32:19 np0005544708 conmon[245862]: conmon d2fed066020d5eaa6beb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d2fed066020d5eaa6beb940ee4e6aac1b57f16e0b68d80f37f918e1c4626860c.scope/container/memory.events
Dec  3 16:32:19 np0005544708 podman[245846]: 2025-12-03 21:32:19.600339509 +0000 UTC m=+0.152402163 container died d2fed066020d5eaa6beb940ee4e6aac1b57f16e0b68d80f37f918e1c4626860c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  3 16:32:19 np0005544708 systemd[1]: var-lib-containers-storage-overlay-83d8fe3b02d6e340e19526364f565cb8f215d725cc8e8f969e514759f451cb74-merged.mount: Deactivated successfully.
Dec  3 16:32:19 np0005544708 podman[245846]: 2025-12-03 21:32:19.645345934 +0000 UTC m=+0.197408588 container remove d2fed066020d5eaa6beb940ee4e6aac1b57f16e0b68d80f37f918e1c4626860c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:32:19 np0005544708 systemd[1]: libpod-conmon-d2fed066020d5eaa6beb940ee4e6aac1b57f16e0b68d80f37f918e1c4626860c.scope: Deactivated successfully.
Dec  3 16:32:19 np0005544708 podman[245885]: 2025-12-03 21:32:19.859133948 +0000 UTC m=+0.066389249 container create ae5e23684d57df630a78d669321a4cd7698c6ff9a7ea250fa0eda57f8187cbcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  3 16:32:19 np0005544708 systemd[1]: Started libpod-conmon-ae5e23684d57df630a78d669321a4cd7698c6ff9a7ea250fa0eda57f8187cbcc.scope.
Dec  3 16:32:19 np0005544708 podman[245885]: 2025-12-03 21:32:19.834851648 +0000 UTC m=+0.042106979 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:32:19 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:32:19 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b4fd392b550d2ed9112c9cdfbbbbb9ad63bf3c806ac7531b3c51e0bbe25293/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:32:19 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b4fd392b550d2ed9112c9cdfbbbbb9ad63bf3c806ac7531b3c51e0bbe25293/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:32:19 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b4fd392b550d2ed9112c9cdfbbbbb9ad63bf3c806ac7531b3c51e0bbe25293/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:32:19 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b4fd392b550d2ed9112c9cdfbbbbb9ad63bf3c806ac7531b3c51e0bbe25293/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:32:19 np0005544708 podman[245885]: 2025-12-03 21:32:19.970194143 +0000 UTC m=+0.177449504 container init ae5e23684d57df630a78d669321a4cd7698c6ff9a7ea250fa0eda57f8187cbcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_easley, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:32:19 np0005544708 podman[245885]: 2025-12-03 21:32:19.98351557 +0000 UTC m=+0.190770841 container start ae5e23684d57df630a78d669321a4cd7698c6ff9a7ea250fa0eda57f8187cbcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  3 16:32:19 np0005544708 podman[245885]: 2025-12-03 21:32:19.987005623 +0000 UTC m=+0.194260904 container attach ae5e23684d57df630a78d669321a4cd7698c6ff9a7ea250fa0eda57f8187cbcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_easley, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:32:20 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v800: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 1.2 KiB/s wr, 10 op/s
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]: {
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:    "0": [
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:        {
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "devices": [
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "/dev/loop3"
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            ],
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "lv_name": "ceph_lv0",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "lv_size": "21470642176",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "name": "ceph_lv0",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "tags": {
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.cluster_name": "ceph",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.crush_device_class": "",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.encrypted": "0",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.objectstore": "bluestore",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.osd_id": "0",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.type": "block",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.vdo": "0",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.with_tpm": "0"
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            },
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "type": "block",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "vg_name": "ceph_vg0"
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:        }
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:    ],
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:    "1": [
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:        {
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "devices": [
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "/dev/loop4"
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            ],
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "lv_name": "ceph_lv1",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "lv_size": "21470642176",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "name": "ceph_lv1",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "tags": {
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.cluster_name": "ceph",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.crush_device_class": "",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.encrypted": "0",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.objectstore": "bluestore",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.osd_id": "1",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.type": "block",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.vdo": "0",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.with_tpm": "0"
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            },
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "type": "block",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "vg_name": "ceph_vg1"
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:        }
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:    ],
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:    "2": [
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:        {
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "devices": [
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "/dev/loop5"
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            ],
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "lv_name": "ceph_lv2",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "lv_size": "21470642176",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "name": "ceph_lv2",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "tags": {
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.cluster_name": "ceph",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.crush_device_class": "",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.encrypted": "0",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.objectstore": "bluestore",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.osd_id": "2",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.type": "block",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.vdo": "0",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:                "ceph.with_tpm": "0"
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            },
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "type": "block",
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:            "vg_name": "ceph_vg2"
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:        }
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]:    ]
Dec  3 16:32:20 np0005544708 affectionate_easley[245902]: }
Dec  3 16:32:20 np0005544708 systemd[1]: libpod-ae5e23684d57df630a78d669321a4cd7698c6ff9a7ea250fa0eda57f8187cbcc.scope: Deactivated successfully.
Dec  3 16:32:20 np0005544708 podman[245885]: 2025-12-03 21:32:20.316548917 +0000 UTC m=+0.523804228 container died ae5e23684d57df630a78d669321a4cd7698c6ff9a7ea250fa0eda57f8187cbcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 16:32:20 np0005544708 systemd[1]: var-lib-containers-storage-overlay-06b4fd392b550d2ed9112c9cdfbbbbb9ad63bf3c806ac7531b3c51e0bbe25293-merged.mount: Deactivated successfully.
Dec  3 16:32:20 np0005544708 podman[245885]: 2025-12-03 21:32:20.36891486 +0000 UTC m=+0.576170191 container remove ae5e23684d57df630a78d669321a4cd7698c6ff9a7ea250fa0eda57f8187cbcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_easley, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:32:20 np0005544708 systemd[1]: libpod-conmon-ae5e23684d57df630a78d669321a4cd7698c6ff9a7ea250fa0eda57f8187cbcc.scope: Deactivated successfully.
Dec  3 16:32:20 np0005544708 nova_compute[241566]: 2025-12-03 21:32:20.552 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:32:20 np0005544708 nova_compute[241566]: 2025-12-03 21:32:20.553 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 16:32:20 np0005544708 nova_compute[241566]: 2025-12-03 21:32:20.553 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 16:32:20 np0005544708 nova_compute[241566]: 2025-12-03 21:32:20.569 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 16:32:20 np0005544708 nova_compute[241566]: 2025-12-03 21:32:20.569 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:32:20 np0005544708 podman[245985]: 2025-12-03 21:32:20.973252383 +0000 UTC m=+0.074335802 container create c9d8df0660a64aded10681a0d05fefbed1c840244a55acc597e035aa02484d3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:32:21 np0005544708 systemd[1]: Started libpod-conmon-c9d8df0660a64aded10681a0d05fefbed1c840244a55acc597e035aa02484d3a.scope.
Dec  3 16:32:21 np0005544708 podman[245985]: 2025-12-03 21:32:20.938647157 +0000 UTC m=+0.039730636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:32:21 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:32:21 np0005544708 podman[245985]: 2025-12-03 21:32:21.085789307 +0000 UTC m=+0.186872786 container init c9d8df0660a64aded10681a0d05fefbed1c840244a55acc597e035aa02484d3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_germain, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Dec  3 16:32:21 np0005544708 podman[245985]: 2025-12-03 21:32:21.095499937 +0000 UTC m=+0.196583346 container start c9d8df0660a64aded10681a0d05fefbed1c840244a55acc597e035aa02484d3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  3 16:32:21 np0005544708 podman[245985]: 2025-12-03 21:32:21.099463393 +0000 UTC m=+0.200546862 container attach c9d8df0660a64aded10681a0d05fefbed1c840244a55acc597e035aa02484d3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:32:21 np0005544708 exciting_germain[246001]: 167 167
Dec  3 16:32:21 np0005544708 systemd[1]: libpod-c9d8df0660a64aded10681a0d05fefbed1c840244a55acc597e035aa02484d3a.scope: Deactivated successfully.
Dec  3 16:32:21 np0005544708 podman[245985]: 2025-12-03 21:32:21.100454749 +0000 UTC m=+0.201538128 container died c9d8df0660a64aded10681a0d05fefbed1c840244a55acc597e035aa02484d3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  3 16:32:21 np0005544708 systemd[1]: var-lib-containers-storage-overlay-6e18369da5eb15e1b4bf6e6dc8e37b6b6ceb285cc389b14c3afa6fc1c36df77b-merged.mount: Deactivated successfully.
Dec  3 16:32:21 np0005544708 podman[245985]: 2025-12-03 21:32:21.140603005 +0000 UTC m=+0.241686364 container remove c9d8df0660a64aded10681a0d05fefbed1c840244a55acc597e035aa02484d3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_germain, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:32:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:32:21 np0005544708 systemd[1]: libpod-conmon-c9d8df0660a64aded10681a0d05fefbed1c840244a55acc597e035aa02484d3a.scope: Deactivated successfully.
Dec  3 16:32:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:32:21
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['backups', 'volumes', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'images', '.mgr']
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:32:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec  3 16:32:21 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec  3 16:32:21 np0005544708 podman[246025]: 2025-12-03 21:32:21.401696037 +0000 UTC m=+0.073328005 container create 7578836eca05fb9a56cea0d847e36c1a29ea04d6c2a7feb40c80390b4f044382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_nobel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:32:21 np0005544708 systemd[1]: Started libpod-conmon-7578836eca05fb9a56cea0d847e36c1a29ea04d6c2a7feb40c80390b4f044382.scope.
Dec  3 16:32:21 np0005544708 podman[246025]: 2025-12-03 21:32:21.372874035 +0000 UTC m=+0.044506053 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:32:21 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:32:21 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d0465c66398b3631c6efe3b419d7929d64ce32e71fe5147d85d0528e22a331/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:32:21 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d0465c66398b3631c6efe3b419d7929d64ce32e71fe5147d85d0528e22a331/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:32:21 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d0465c66398b3631c6efe3b419d7929d64ce32e71fe5147d85d0528e22a331/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:32:21 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d0465c66398b3631c6efe3b419d7929d64ce32e71fe5147d85d0528e22a331/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:32:21 np0005544708 podman[246025]: 2025-12-03 21:32:21.528303797 +0000 UTC m=+0.199935825 container init 7578836eca05fb9a56cea0d847e36c1a29ea04d6c2a7feb40c80390b4f044382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_nobel, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:32:21 np0005544708 podman[246025]: 2025-12-03 21:32:21.539694882 +0000 UTC m=+0.211326850 container start 7578836eca05fb9a56cea0d847e36c1a29ea04d6c2a7feb40c80390b4f044382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:32:21 np0005544708 podman[246025]: 2025-12-03 21:32:21.544015188 +0000 UTC m=+0.215647206 container attach 7578836eca05fb9a56cea0d847e36c1a29ea04d6c2a7feb40c80390b4f044382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_nobel, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:32:21 np0005544708 nova_compute[241566]: 2025-12-03 21:32:21.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:32:21 np0005544708 nova_compute[241566]: 2025-12-03 21:32:21.552 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:32:21 np0005544708 nova_compute[241566]: 2025-12-03 21:32:21.553 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:32:21 np0005544708 nova_compute[241566]: 2025-12-03 21:32:21.553 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:32:21 np0005544708 nova_compute[241566]: 2025-12-03 21:32:21.583 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:32:21 np0005544708 nova_compute[241566]: 2025-12-03 21:32:21.584 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:32:21 np0005544708 nova_compute[241566]: 2025-12-03 21:32:21.585 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:32:21 np0005544708 nova_compute[241566]: 2025-12-03 21:32:21.585 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 16:32:21 np0005544708 nova_compute[241566]: 2025-12-03 21:32:21.586 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:32:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:32:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:32:22 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1052663395' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:32:22 np0005544708 nova_compute[241566]: 2025-12-03 21:32:22.152 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:32:22 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v802: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 KiB/s wr, 27 op/s
Dec  3 16:32:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec  3 16:32:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec  3 16:32:22 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec  3 16:32:22 np0005544708 lvm[246152]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:32:22 np0005544708 lvm[246153]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:32:22 np0005544708 lvm[246153]: VG ceph_vg1 finished
Dec  3 16:32:22 np0005544708 lvm[246152]: VG ceph_vg2 finished
Dec  3 16:32:22 np0005544708 lvm[246150]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:32:22 np0005544708 lvm[246150]: VG ceph_vg0 finished
Dec  3 16:32:22 np0005544708 nova_compute[241566]: 2025-12-03 21:32:22.341 241570 WARNING nova.virt.libvirt.driver [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 16:32:22 np0005544708 nova_compute[241566]: 2025-12-03 21:32:22.342 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5153MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 16:32:22 np0005544708 nova_compute[241566]: 2025-12-03 21:32:22.343 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:32:22 np0005544708 nova_compute[241566]: 2025-12-03 21:32:22.343 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:32:22 np0005544708 nostalgic_nobel[246042]: {}
Dec  3 16:32:22 np0005544708 podman[246139]: 2025-12-03 21:32:22.40748474 +0000 UTC m=+0.124626439 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  3 16:32:22 np0005544708 systemd[1]: libpod-7578836eca05fb9a56cea0d847e36c1a29ea04d6c2a7feb40c80390b4f044382.scope: Deactivated successfully.
Dec  3 16:32:22 np0005544708 systemd[1]: libpod-7578836eca05fb9a56cea0d847e36c1a29ea04d6c2a7feb40c80390b4f044382.scope: Consumed 1.363s CPU time.
Dec  3 16:32:22 np0005544708 podman[246173]: 2025-12-03 21:32:22.460826659 +0000 UTC m=+0.031971248 container died 7578836eca05fb9a56cea0d847e36c1a29ea04d6c2a7feb40c80390b4f044382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:32:22 np0005544708 systemd[1]: var-lib-containers-storage-overlay-64d0465c66398b3631c6efe3b419d7929d64ce32e71fe5147d85d0528e22a331-merged.mount: Deactivated successfully.
Dec  3 16:32:22 np0005544708 podman[246173]: 2025-12-03 21:32:22.498256861 +0000 UTC m=+0.069401430 container remove 7578836eca05fb9a56cea0d847e36c1a29ea04d6c2a7feb40c80390b4f044382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_nobel, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 16:32:22 np0005544708 systemd[1]: libpod-conmon-7578836eca05fb9a56cea0d847e36c1a29ea04d6c2a7feb40c80390b4f044382.scope: Deactivated successfully.
Dec  3 16:32:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:32:22 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:32:22 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:32:22 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:32:22 np0005544708 nova_compute[241566]: 2025-12-03 21:32:22.576 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  3 16:32:22 np0005544708 nova_compute[241566]: 2025-12-03 21:32:22.577 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  3 16:32:22 np0005544708 nova_compute[241566]: 2025-12-03 21:32:22.640 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Refreshing inventories for resource provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  3 16:32:22 np0005544708 nova_compute[241566]: 2025-12-03 21:32:22.718 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Updating ProviderTree inventory for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  3 16:32:22 np0005544708 nova_compute[241566]: 2025-12-03 21:32:22.719 241570 DEBUG nova.compute.provider_tree [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Updating inventory in ProviderTree for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  3 16:32:22 np0005544708 nova_compute[241566]: 2025-12-03 21:32:22.736 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Refreshing aggregate associations for resource provider 94aba67c-5c5e-45d0-83d1-33eb467c8775, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  3 16:32:22 np0005544708 nova_compute[241566]: 2025-12-03 21:32:22.767 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Refreshing trait associations for resource provider 94aba67c-5c5e-45d0-83d1-33eb467c8775, traits: HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_FMA3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  3 16:32:22 np0005544708 nova_compute[241566]: 2025-12-03 21:32:22.781 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 16:32:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:32:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3524126243' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:32:23 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:32:23 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:32:23 np0005544708 nova_compute[241566]: 2025-12-03 21:32:23.462 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.681s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 16:32:23 np0005544708 nova_compute[241566]: 2025-12-03 21:32:23.471 241570 DEBUG nova.compute.provider_tree [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed in ProviderTree for provider: 94aba67c-5c5e-45d0-83d1-33eb467c8775 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 16:32:23 np0005544708 nova_compute[241566]: 2025-12-03 21:32:23.491 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 16:32:23 np0005544708 nova_compute[241566]: 2025-12-03 21:32:23.492 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  3 16:32:23 np0005544708 nova_compute[241566]: 2025-12-03 21:32:23.493 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 16:32:24 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v804: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 639 B/s wr, 15 op/s
Dec  3 16:32:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:32:26 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v805: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.5 KiB/s wr, 37 op/s
Dec  3 16:32:27 np0005544708 nova_compute[241566]: 2025-12-03 21:32:27.332 241570 DEBUG oslo_concurrency.lockutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Acquiring lock "ab23bcbe-2091-4277-8f17-e9554b017c36" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 16:32:27 np0005544708 nova_compute[241566]: 2025-12-03 21:32:27.333 241570 DEBUG oslo_concurrency.lockutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "ab23bcbe-2091-4277-8f17-e9554b017c36" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 16:32:27 np0005544708 nova_compute[241566]: 2025-12-03 21:32:27.367 241570 DEBUG nova.compute.manager [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  3 16:32:27 np0005544708 nova_compute[241566]: 2025-12-03 21:32:27.499 241570 DEBUG oslo_concurrency.lockutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 16:32:27 np0005544708 nova_compute[241566]: 2025-12-03 21:32:27.500 241570 DEBUG oslo_concurrency.lockutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 16:32:27 np0005544708 nova_compute[241566]: 2025-12-03 21:32:27.508 241570 DEBUG nova.virt.hardware [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  3 16:32:27 np0005544708 nova_compute[241566]: 2025-12-03 21:32:27.509 241570 INFO nova.compute.claims [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Claim successful on node compute-0.ctlplane.example.com
Dec  3 16:32:27 np0005544708 nova_compute[241566]: 2025-12-03 21:32:27.624 241570 DEBUG oslo_concurrency.processutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 16:32:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:32:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:32:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:32:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:32:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:32:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:32:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.9986981750039173e-07 of space, bias 1.0, pg target 5.996094525011752e-05 quantized to 32 (current 32)
Dec  3 16:32:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:32:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:32:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:32:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677743597888245 of space, bias 1.0, pg target 0.20033230793664736 quantized to 32 (current 32)
Dec  3 16:32:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:32:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1589623851345146e-06 of space, bias 4.0, pg target 0.0013907548621614175 quantized to 16 (current 16)
Dec  3 16:32:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:32:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:32:28 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:32:28 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2052140997' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:32:28 np0005544708 nova_compute[241566]: 2025-12-03 21:32:28.157 241570 DEBUG oslo_concurrency.processutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 16:32:28 np0005544708 nova_compute[241566]: 2025-12-03 21:32:28.163 241570 DEBUG nova.compute.provider_tree [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Inventory has not changed in ProviderTree for provider: 94aba67c-5c5e-45d0-83d1-33eb467c8775 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  3 16:32:28 np0005544708 nova_compute[241566]: 2025-12-03 21:32:28.182 241570 DEBUG nova.scheduler.client.report [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Inventory has not changed for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  3 16:32:28 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v806: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.5 KiB/s wr, 37 op/s
Dec  3 16:32:28 np0005544708 nova_compute[241566]: 2025-12-03 21:32:28.206 241570 DEBUG oslo_concurrency.lockutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 16:32:28 np0005544708 nova_compute[241566]: 2025-12-03 21:32:28.206 241570 DEBUG nova.compute.manager [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  3 16:32:28 np0005544708 nova_compute[241566]: 2025-12-03 21:32:28.269 241570 DEBUG nova.compute.manager [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  3 16:32:28 np0005544708 nova_compute[241566]: 2025-12-03 21:32:28.269 241570 DEBUG nova.network.neutron [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  3 16:32:28 np0005544708 nova_compute[241566]: 2025-12-03 21:32:28.305 241570 INFO nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  3 16:32:28 np0005544708 nova_compute[241566]: 2025-12-03 21:32:28.323 241570 DEBUG nova.compute.manager [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  3 16:32:28 np0005544708 nova_compute[241566]: 2025-12-03 21:32:28.363 241570 INFO nova.virt.block_device [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Booting with volume 74f6cb4b-c1f6-4650-97bb-811b731c0960 at /dev/vda
Dec  3 16:32:28 np0005544708 nova_compute[241566]: 2025-12-03 21:32:28.822 241570 DEBUG os_brick.utils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec  3 16:32:28 np0005544708 nova_compute[241566]: 2025-12-03 21:32:28.824 241570 INFO oslo.privsep.daemon [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp82yjnbzm/privsep.sock']
Dec  3 16:32:28 np0005544708 nova_compute[241566]: 2025-12-03 21:32:28.956 241570 DEBUG nova.network.neutron [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec  3 16:32:28 np0005544708 nova_compute[241566]: 2025-12-03 21:32:28.957 241570 DEBUG nova.compute.manager [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.588 241570 INFO oslo.privsep.daemon [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Spawned new privsep daemon via rootwrap
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.447 246262 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.455 246262 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.460 246262 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.461 246262 INFO oslo.privsep.daemon [-] privsep daemon running as pid 246262
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.591 246262 DEBUG oslo.privsep.daemon [-] privsep: reply[1895525e-2aab-48de-a2ca-51b3dfe3df0b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.711 246262 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.722 246262 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.723 246262 DEBUG oslo.privsep.daemon [-] privsep: reply[7d88742c-bd70-4052-b586-9b8ae30e491f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.724 246262 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.732 246262 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.732 246262 DEBUG oslo.privsep.daemon [-] privsep: reply[eeda5bc5-2d3c-490f-92ec-440b6f613311]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9ad6421bbbcd', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.734 246262 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.747 246262 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.748 246262 DEBUG oslo.privsep.daemon [-] privsep: reply[ddee79ae-d7ef-493f-8d41-2ab47fe31233]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.749 246262 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc2d889-d255-4ab6-8358-cfdc2d408fa1]: (4, 'fe808748-0a27-4a3c-9875-a9777da5fa17') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.750 241570 DEBUG oslo_concurrency.processutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.774 241570 DEBUG oslo_concurrency.processutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.777 241570 DEBUG os_brick.initiator.connectors.lightos [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.777 241570 DEBUG os_brick.initiator.connectors.lightos [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.778 241570 DEBUG os_brick.initiator.connectors.lightos [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.778 241570 DEBUG os_brick.utils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] <== get_connector_properties: return (955ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9ad6421bbbcd', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'fe808748-0a27-4a3c-9875-a9777da5fa17', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec  3 16:32:29 np0005544708 nova_compute[241566]: 2025-12-03 21:32:29.779 241570 DEBUG nova.virt.block_device [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Updating existing volume attachment record: 4cd2ac45-ce27-450a-b9c6-58e7e1803ad9 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec  3 16:32:30 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v807: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.7 KiB/s wr, 20 op/s
Dec  3 16:32:30 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  3 16:32:30 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3498212241' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.961 241570 DEBUG nova.compute.manager [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.964 241570 DEBUG nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.964 241570 INFO nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Creating image(s)#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.965 241570 DEBUG nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.966 241570 DEBUG nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Ensure instance console log exists: /var/lib/nova/instances/ab23bcbe-2091-4277-8f17-e9554b017c36/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.966 241570 DEBUG oslo_concurrency.lockutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.967 241570 DEBUG oslo_concurrency.lockutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.967 241570 DEBUG oslo_concurrency.lockutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.969 241570 DEBUG nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': True, 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'attachment_id': '4cd2ac45-ce27-450a-b9c6-58e7e1803ad9', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-74f6cb4b-c1f6-4650-97bb-811b731c0960', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '74f6cb4b-c1f6-4650-97bb-811b731c0960', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ab23bcbe-2091-4277-8f17-e9554b017c36', 'attached_at': '', 'detached_at': '', 'volume_id': '74f6cb4b-c1f6-4650-97bb-811b731c0960', 'serial': '74f6cb4b-c1f6-4650-97bb-811b731c0960'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.973 241570 WARNING nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.978 241570 DEBUG nova.virt.libvirt.host [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.979 241570 DEBUG nova.virt.libvirt.host [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.982 241570 DEBUG nova.virt.libvirt.host [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.983 241570 DEBUG nova.virt.libvirt.host [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.984 241570 DEBUG nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.984 241570 DEBUG nova.virt.hardware [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T21:30:47Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e4062fae-6b9c-487c-944b-c7d7fb777ccb',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.985 241570 DEBUG nova.virt.hardware [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.985 241570 DEBUG nova.virt.hardware [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.986 241570 DEBUG nova.virt.hardware [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.986 241570 DEBUG nova.virt.hardware [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.987 241570 DEBUG nova.virt.hardware [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.987 241570 DEBUG nova.virt.hardware [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.988 241570 DEBUG nova.virt.hardware [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.988 241570 DEBUG nova.virt.hardware [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.989 241570 DEBUG nova.virt.hardware [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 16:32:30 np0005544708 nova_compute[241566]: 2025-12-03 21:32:30.989 241570 DEBUG nova.virt.hardware [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 16:32:31 np0005544708 nova_compute[241566]: 2025-12-03 21:32:31.025 241570 DEBUG nova.storage.rbd_utils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] rbd image ab23bcbe-2091-4277-8f17-e9554b017c36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 16:32:31 np0005544708 nova_compute[241566]: 2025-12-03 21:32:31.030 241570 DEBUG nova.privsep.utils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  3 16:32:31 np0005544708 nova_compute[241566]: 2025-12-03 21:32:31.031 241570 DEBUG oslo_concurrency.processutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:32:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:32:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec  3 16:32:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec  3 16:32:31 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec  3 16:32:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  3 16:32:31 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1291370575' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  3 16:32:31 np0005544708 nova_compute[241566]: 2025-12-03 21:32:31.602 241570 DEBUG oslo_concurrency.processutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:32:31 np0005544708 nova_compute[241566]: 2025-12-03 21:32:31.604 241570 DEBUG oslo_concurrency.lockutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:32:31 np0005544708 nova_compute[241566]: 2025-12-03 21:32:31.605 241570 DEBUG oslo_concurrency.lockutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:32:31 np0005544708 nova_compute[241566]: 2025-12-03 21:32:31.606 241570 DEBUG oslo_concurrency.lockutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:32:31 np0005544708 systemd[1]: Starting libvirt secret daemon...
Dec  3 16:32:31 np0005544708 systemd[1]: Started libvirt secret daemon.
Dec  3 16:32:31 np0005544708 nova_compute[241566]: 2025-12-03 21:32:31.682 241570 DEBUG nova.objects.instance [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lazy-loading 'pci_devices' on Instance uuid ab23bcbe-2091-4277-8f17-e9554b017c36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 16:32:31 np0005544708 nova_compute[241566]: 2025-12-03 21:32:31.696 241570 DEBUG nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] End _get_guest_xml xml=<domain type="kvm">
Dec  3 16:32:31 np0005544708 nova_compute[241566]:  <uuid>ab23bcbe-2091-4277-8f17-e9554b017c36</uuid>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:  <name>instance-00000001</name>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:  <memory>131072</memory>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:  <vcpu>1</vcpu>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:  <metadata>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <nova:name>instance-depend-image</nova:name>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <nova:creationTime>2025-12-03 21:32:30</nova:creationTime>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <nova:flavor name="m1.nano">
Dec  3 16:32:31 np0005544708 nova_compute[241566]:        <nova:memory>128</nova:memory>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:        <nova:disk>1</nova:disk>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:        <nova:swap>0</nova:swap>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:        <nova:vcpus>1</nova:vcpus>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      </nova:flavor>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <nova:owner>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:        <nova:user uuid="bc25c6732c60417d92846f1367ba9a4f">tempest-ImageDependencyTests-323442990-project-member</nova:user>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:        <nova:project uuid="11092597966341b0915e8c2a6530e568">tempest-ImageDependencyTests-323442990</nova:project>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      </nova:owner>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <nova:ports/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    </nova:instance>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:  </metadata>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:  <sysinfo type="smbios">
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <system>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <entry name="manufacturer">RDO</entry>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <entry name="product">OpenStack Compute</entry>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <entry name="serial">ab23bcbe-2091-4277-8f17-e9554b017c36</entry>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <entry name="uuid">ab23bcbe-2091-4277-8f17-e9554b017c36</entry>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <entry name="family">Virtual Machine</entry>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    </system>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:  </sysinfo>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:  <os>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <boot dev="hd"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <smbios mode="sysinfo"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:  </os>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:  <features>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <acpi/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <apic/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <vmcoreinfo/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:  </features>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:  <clock offset="utc">
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <timer name="hpet" present="no"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:  </clock>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:  <cpu mode="host-model" match="exact">
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:  </cpu>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:  <devices>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <disk type="network" device="cdrom">
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <driver type="raw" cache="none"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <source protocol="rbd" name="vms/ab23bcbe-2091-4277-8f17-e9554b017c36_disk.config">
Dec  3 16:32:31 np0005544708 nova_compute[241566]:        <host name="192.168.122.100" port="6789"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      </source>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <auth username="openstack">
Dec  3 16:32:31 np0005544708 nova_compute[241566]:        <secret type="ceph" uuid="c21de27e-a7fd-594b-8324-0697ba9aab3a"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      </auth>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <target dev="sda" bus="sata"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    </disk>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <disk type="network" device="disk">
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <source protocol="rbd" name="volumes/volume-74f6cb4b-c1f6-4650-97bb-811b731c0960">
Dec  3 16:32:31 np0005544708 nova_compute[241566]:        <host name="192.168.122.100" port="6789"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      </source>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <auth username="openstack">
Dec  3 16:32:31 np0005544708 nova_compute[241566]:        <secret type="ceph" uuid="c21de27e-a7fd-594b-8324-0697ba9aab3a"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      </auth>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <target dev="vda" bus="virtio"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <serial>74f6cb4b-c1f6-4650-97bb-811b731c0960</serial>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    </disk>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <serial type="pty">
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <log file="/var/lib/nova/instances/ab23bcbe-2091-4277-8f17-e9554b017c36/console.log" append="off"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    </serial>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <video>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <model type="virtio"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    </video>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <input type="tablet" bus="usb"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <rng model="virtio">
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <backend model="random">/dev/urandom</backend>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    </rng>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <controller type="usb" index="0"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    <memballoon model="virtio">
Dec  3 16:32:31 np0005544708 nova_compute[241566]:      <stats period="10"/>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:    </memballoon>
Dec  3 16:32:31 np0005544708 nova_compute[241566]:  </devices>
Dec  3 16:32:31 np0005544708 nova_compute[241566]: </domain>
Dec  3 16:32:31 np0005544708 nova_compute[241566]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 16:32:31 np0005544708 nova_compute[241566]: 2025-12-03 21:32:31.748 241570 DEBUG nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 16:32:31 np0005544708 nova_compute[241566]: 2025-12-03 21:32:31.749 241570 DEBUG nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 16:32:31 np0005544708 nova_compute[241566]: 2025-12-03 21:32:31.749 241570 INFO nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Using config drive#033[00m
Dec  3 16:32:31 np0005544708 nova_compute[241566]: 2025-12-03 21:32:31.780 241570 DEBUG nova.storage.rbd_utils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] rbd image ab23bcbe-2091-4277-8f17-e9554b017c36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 16:32:32 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v809: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.5 KiB/s wr, 18 op/s
Dec  3 16:32:32 np0005544708 nova_compute[241566]: 2025-12-03 21:32:32.854 241570 INFO nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Creating config drive at /var/lib/nova/instances/ab23bcbe-2091-4277-8f17-e9554b017c36/disk.config#033[00m
Dec  3 16:32:32 np0005544708 nova_compute[241566]: 2025-12-03 21:32:32.860 241570 DEBUG oslo_concurrency.processutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ab23bcbe-2091-4277-8f17-e9554b017c36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmoyd8gii execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:32:33 np0005544708 nova_compute[241566]: 2025-12-03 21:32:33.003 241570 DEBUG oslo_concurrency.processutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ab23bcbe-2091-4277-8f17-e9554b017c36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmoyd8gii" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:32:33 np0005544708 nova_compute[241566]: 2025-12-03 21:32:33.029 241570 DEBUG nova.storage.rbd_utils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] rbd image ab23bcbe-2091-4277-8f17-e9554b017c36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 16:32:33 np0005544708 nova_compute[241566]: 2025-12-03 21:32:33.032 241570 DEBUG oslo_concurrency.processutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ab23bcbe-2091-4277-8f17-e9554b017c36/disk.config ab23bcbe-2091-4277-8f17-e9554b017c36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:32:33 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec  3 16:32:33 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec  3 16:32:33 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec  3 16:32:34 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec  3 16:32:34 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v811: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:32:34 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec  3 16:32:34 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec  3 16:32:34 np0005544708 nova_compute[241566]: 2025-12-03 21:32:34.284 241570 DEBUG oslo_concurrency.processutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ab23bcbe-2091-4277-8f17-e9554b017c36/disk.config ab23bcbe-2091-4277-8f17-e9554b017c36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:32:34 np0005544708 nova_compute[241566]: 2025-12-03 21:32:34.285 241570 INFO nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Deleting local config drive /var/lib/nova/instances/ab23bcbe-2091-4277-8f17-e9554b017c36/disk.config because it was imported into RBD.#033[00m
Dec  3 16:32:34 np0005544708 systemd-machined[203931]: New machine qemu-1-instance-00000001.
Dec  3 16:32:34 np0005544708 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.042 241570 DEBUG nova.virt.driver [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Emitting event <LifecycleEvent: 1764797555.0419087, ab23bcbe-2091-4277-8f17-e9554b017c36 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.044 241570 INFO nova.compute.manager [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] VM Resumed (Lifecycle Event)#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.050 241570 DEBUG nova.compute.manager [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.051 241570 DEBUG nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.056 241570 INFO nova.virt.libvirt.driver [-] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Instance spawned successfully.#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.057 241570 DEBUG nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.105 241570 DEBUG nova.compute.manager [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.120 241570 DEBUG nova.compute.manager [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.124 241570 DEBUG nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.125 241570 DEBUG nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.126 241570 DEBUG nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.127 241570 DEBUG nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.128 241570 DEBUG nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.129 241570 DEBUG nova.virt.libvirt.driver [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.163 241570 INFO nova.compute.manager [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.164 241570 DEBUG nova.virt.driver [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Emitting event <LifecycleEvent: 1764797555.0435162, ab23bcbe-2091-4277-8f17-e9554b017c36 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.165 241570 INFO nova.compute.manager [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] VM Started (Lifecycle Event)#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.192 241570 DEBUG nova.compute.manager [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.197 241570 DEBUG nova.compute.manager [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.205 241570 INFO nova.compute.manager [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Took 4.24 seconds to spawn the instance on the hypervisor.#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.207 241570 DEBUG nova.compute.manager [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.223 241570 INFO nova.compute.manager [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.277 241570 INFO nova.compute.manager [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Took 7.81 seconds to build instance.#033[00m
Dec  3 16:32:35 np0005544708 nova_compute[241566]: 2025-12-03 21:32:35.299 241570 DEBUG oslo_concurrency.lockutils [None req-74cc14fe-fcfb-4dad-9e1d-36a690c180a3 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "ab23bcbe-2091-4277-8f17-e9554b017c36" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:32:36 np0005544708 podman[246446]: 2025-12-03 21:32:36.139260459 +0000 UTC m=+0.068060735 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:32:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:32:36 np0005544708 podman[246445]: 2025-12-03 21:32:36.155879634 +0000 UTC m=+0.085066830 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible)
Dec  3 16:32:36 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v813: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 25 KiB/s wr, 26 op/s
Dec  3 16:32:38 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v814: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 22 KiB/s wr, 22 op/s
Dec  3 16:32:38 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec  3 16:32:38 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec  3 16:32:38 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec  3 16:32:39 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec  3 16:32:39 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec  3 16:32:39 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec  3 16:32:40 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v817: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 25 KiB/s wr, 26 op/s
Dec  3 16:32:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:32:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec  3 16:32:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec  3 16:32:41 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec  3 16:32:42 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v819: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 3.7 KiB/s wr, 97 op/s
Dec  3 16:32:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec  3 16:32:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec  3 16:32:42 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec  3 16:32:43 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec  3 16:32:43 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec  3 16:32:43 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec  3 16:32:44 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v822: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 4.4 KiB/s wr, 117 op/s
Dec  3 16:32:45 np0005544708 nova_compute[241566]: 2025-12-03 21:32:45.783 241570 DEBUG oslo_concurrency.lockutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Acquiring lock "b947bb8b-dad6-41ce-9f54-836a10775855" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:32:45 np0005544708 nova_compute[241566]: 2025-12-03 21:32:45.784 241570 DEBUG oslo_concurrency.lockutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "b947bb8b-dad6-41ce-9f54-836a10775855" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:32:45 np0005544708 nova_compute[241566]: 2025-12-03 21:32:45.804 241570 DEBUG nova.compute.manager [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  3 16:32:45 np0005544708 nova_compute[241566]: 2025-12-03 21:32:45.898 241570 DEBUG oslo_concurrency.lockutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:32:45 np0005544708 nova_compute[241566]: 2025-12-03 21:32:45.898 241570 DEBUG oslo_concurrency.lockutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:32:45 np0005544708 nova_compute[241566]: 2025-12-03 21:32:45.925 241570 DEBUG nova.virt.hardware [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  3 16:32:45 np0005544708 nova_compute[241566]: 2025-12-03 21:32:45.925 241570 INFO nova.compute.claims [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  3 16:32:46 np0005544708 nova_compute[241566]: 2025-12-03 21:32:46.114 241570 DEBUG oslo_concurrency.processutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:32:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:32:46 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v823: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 8.0 KiB/s wr, 166 op/s
Dec  3 16:32:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:32:46 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1613450328' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:32:46 np0005544708 nova_compute[241566]: 2025-12-03 21:32:46.641 241570 DEBUG oslo_concurrency.processutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:32:46 np0005544708 nova_compute[241566]: 2025-12-03 21:32:46.650 241570 DEBUG nova.compute.provider_tree [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Inventory has not changed in ProviderTree for provider: 94aba67c-5c5e-45d0-83d1-33eb467c8775 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 16:32:46 np0005544708 nova_compute[241566]: 2025-12-03 21:32:46.672 241570 DEBUG nova.scheduler.client.report [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Inventory has not changed for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 16:32:46 np0005544708 nova_compute[241566]: 2025-12-03 21:32:46.703 241570 DEBUG oslo_concurrency.lockutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:32:46 np0005544708 nova_compute[241566]: 2025-12-03 21:32:46.704 241570 DEBUG nova.compute.manager [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  3 16:32:46 np0005544708 nova_compute[241566]: 2025-12-03 21:32:46.774 241570 DEBUG nova.compute.manager [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  3 16:32:46 np0005544708 nova_compute[241566]: 2025-12-03 21:32:46.775 241570 DEBUG nova.network.neutron [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  3 16:32:46 np0005544708 nova_compute[241566]: 2025-12-03 21:32:46.801 241570 INFO nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  3 16:32:46 np0005544708 nova_compute[241566]: 2025-12-03 21:32:46.831 241570 DEBUG nova.compute.manager [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  3 16:32:46 np0005544708 nova_compute[241566]: 2025-12-03 21:32:46.935 241570 DEBUG nova.compute.manager [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  3 16:32:46 np0005544708 nova_compute[241566]: 2025-12-03 21:32:46.938 241570 DEBUG nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  3 16:32:46 np0005544708 nova_compute[241566]: 2025-12-03 21:32:46.938 241570 INFO nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Creating image(s)#033[00m
Dec  3 16:32:46 np0005544708 nova_compute[241566]: 2025-12-03 21:32:46.970 241570 DEBUG nova.storage.rbd_utils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] rbd image b947bb8b-dad6-41ce-9f54-836a10775855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.003 241570 DEBUG nova.storage.rbd_utils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] rbd image b947bb8b-dad6-41ce-9f54-836a10775855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.031 241570 DEBUG nova.storage.rbd_utils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] rbd image b947bb8b-dad6-41ce-9f54-836a10775855_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.035 241570 DEBUG oslo_concurrency.lockutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Acquiring lock "14c656cff84150942006df12a6d997e516fe4350" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.036 241570 DEBUG oslo_concurrency.lockutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "14c656cff84150942006df12a6d997e516fe4350" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.146 241570 DEBUG nova.network.neutron [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.147 241570 DEBUG nova.compute.manager [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.235 241570 DEBUG nova.virt.libvirt.imagebackend [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Image locations are: [{'url': 'rbd://c21de27e-a7fd-594b-8324-0697ba9aab3a/images/d8f4b089-9930-48c5-890a-b63bbf40a7c4/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c21de27e-a7fd-594b-8324-0697ba9aab3a/images/d8f4b089-9930-48c5-890a-b63bbf40a7c4/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.307 241570 DEBUG nova.virt.libvirt.imagebackend [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Selected location: {'url': 'rbd://c21de27e-a7fd-594b-8324-0697ba9aab3a/images/d8f4b089-9930-48c5-890a-b63bbf40a7c4/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.308 241570 DEBUG nova.storage.rbd_utils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] cloning images/d8f4b089-9930-48c5-890a-b63bbf40a7c4@snap to None/b947bb8b-dad6-41ce-9f54-836a10775855_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.430 241570 DEBUG oslo_concurrency.lockutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "14c656cff84150942006df12a6d997e516fe4350" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.598 241570 DEBUG nova.storage.rbd_utils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] resizing rbd image b947bb8b-dad6-41ce-9f54-836a10775855_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.677 241570 DEBUG nova.objects.instance [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lazy-loading 'migration_context' on Instance uuid b947bb8b-dad6-41ce-9f54-836a10775855 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.696 241570 DEBUG nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.696 241570 DEBUG nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Ensure instance console log exists: /var/lib/nova/instances/b947bb8b-dad6-41ce-9f54-836a10775855/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.697 241570 DEBUG oslo_concurrency.lockutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.697 241570 DEBUG oslo_concurrency.lockutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.698 241570 DEBUG oslo_concurrency.lockutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.701 241570 DEBUG nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='fa160df59a096a4d68dea663d126b5de',container_format='bare',created_at=2025-12-03T21:32:42Z,direct_url=<?>,disk_format='raw',id=d8f4b089-9930-48c5-890a-b63bbf40a7c4,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1153271457',owner='11092597966341b0915e8c2a6530e568',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-12-03T21:32:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'size': 0, 'device_name': '/dev/vda', 'boot_index': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'd8f4b089-9930-48c5-890a-b63bbf40a7c4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.706 241570 WARNING nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.711 241570 DEBUG nova.virt.libvirt.host [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.712 241570 DEBUG nova.virt.libvirt.host [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.716 241570 DEBUG nova.virt.libvirt.host [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.717 241570 DEBUG nova.virt.libvirt.host [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.717 241570 DEBUG nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.718 241570 DEBUG nova.virt.hardware [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-03T21:30:47Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e4062fae-6b9c-487c-944b-c7d7fb777ccb',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='fa160df59a096a4d68dea663d126b5de',container_format='bare',created_at=2025-12-03T21:32:42Z,direct_url=<?>,disk_format='raw',id=d8f4b089-9930-48c5-890a-b63bbf40a7c4,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1153271457',owner='11092597966341b0915e8c2a6530e568',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-12-03T21:32:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.718 241570 DEBUG nova.virt.hardware [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.718 241570 DEBUG nova.virt.hardware [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.719 241570 DEBUG nova.virt.hardware [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.719 241570 DEBUG nova.virt.hardware [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.719 241570 DEBUG nova.virt.hardware [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.719 241570 DEBUG nova.virt.hardware [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.720 241570 DEBUG nova.virt.hardware [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.720 241570 DEBUG nova.virt.hardware [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.720 241570 DEBUG nova.virt.hardware [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.721 241570 DEBUG nova.virt.hardware [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  3 16:32:47 np0005544708 nova_compute[241566]: 2025-12-03 21:32:47.724 241570 DEBUG oslo_concurrency.processutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:32:48 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v824: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.7 KiB/s wr, 59 op/s
Dec  3 16:32:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  3 16:32:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2672130748' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  3 16:32:48 np0005544708 nova_compute[241566]: 2025-12-03 21:32:48.269 241570 DEBUG oslo_concurrency.processutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:32:48 np0005544708 nova_compute[241566]: 2025-12-03 21:32:48.295 241570 DEBUG nova.storage.rbd_utils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] rbd image b947bb8b-dad6-41ce-9f54-836a10775855_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 16:32:48 np0005544708 nova_compute[241566]: 2025-12-03 21:32:48.300 241570 DEBUG oslo_concurrency.processutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:32:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  3 16:32:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/826435445' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec  3 16:32:48 np0005544708 nova_compute[241566]: 2025-12-03 21:32:48.842 241570 DEBUG oslo_concurrency.processutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:32:48 np0005544708 nova_compute[241566]: 2025-12-03 21:32:48.844 241570 DEBUG nova.objects.instance [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lazy-loading 'pci_devices' on Instance uuid b947bb8b-dad6-41ce-9f54-836a10775855 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 16:32:48 np0005544708 nova_compute[241566]: 2025-12-03 21:32:48.858 241570 DEBUG nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] End _get_guest_xml xml=<domain type="kvm">
Dec  3 16:32:48 np0005544708 nova_compute[241566]:  <uuid>b947bb8b-dad6-41ce-9f54-836a10775855</uuid>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:  <name>instance-00000002</name>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:  <memory>131072</memory>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:  <vcpu>1</vcpu>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:  <metadata>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <nova:name>instance-depend-image</nova:name>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <nova:creationTime>2025-12-03 21:32:47</nova:creationTime>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <nova:flavor name="m1.nano">
Dec  3 16:32:48 np0005544708 nova_compute[241566]:        <nova:memory>128</nova:memory>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:        <nova:disk>1</nova:disk>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:        <nova:swap>0</nova:swap>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:        <nova:ephemeral>0</nova:ephemeral>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:        <nova:vcpus>1</nova:vcpus>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      </nova:flavor>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <nova:owner>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:        <nova:user uuid="bc25c6732c60417d92846f1367ba9a4f">tempest-ImageDependencyTests-323442990-project-member</nova:user>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:        <nova:project uuid="11092597966341b0915e8c2a6530e568">tempest-ImageDependencyTests-323442990</nova:project>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      </nova:owner>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <nova:root type="image" uuid="d8f4b089-9930-48c5-890a-b63bbf40a7c4"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <nova:ports/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    </nova:instance>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:  </metadata>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:  <sysinfo type="smbios">
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <system>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <entry name="manufacturer">RDO</entry>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <entry name="product">OpenStack Compute</entry>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <entry name="serial">b947bb8b-dad6-41ce-9f54-836a10775855</entry>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <entry name="uuid">b947bb8b-dad6-41ce-9f54-836a10775855</entry>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <entry name="family">Virtual Machine</entry>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    </system>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:  </sysinfo>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:  <os>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <boot dev="hd"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <smbios mode="sysinfo"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:  </os>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:  <features>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <acpi/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <apic/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <vmcoreinfo/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:  </features>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:  <clock offset="utc">
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <timer name="pit" tickpolicy="delay"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <timer name="hpet" present="no"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:  </clock>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:  <cpu mode="host-model" match="exact">
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <topology sockets="1" cores="1" threads="1"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:  </cpu>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:  <devices>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <disk type="network" device="disk">
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <driver type="raw" cache="none"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <source protocol="rbd" name="vms/b947bb8b-dad6-41ce-9f54-836a10775855_disk">
Dec  3 16:32:48 np0005544708 nova_compute[241566]:        <host name="192.168.122.100" port="6789"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      </source>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <auth username="openstack">
Dec  3 16:32:48 np0005544708 nova_compute[241566]:        <secret type="ceph" uuid="c21de27e-a7fd-594b-8324-0697ba9aab3a"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      </auth>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <target dev="vda" bus="virtio"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    </disk>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <disk type="network" device="cdrom">
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <driver type="raw" cache="none"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <source protocol="rbd" name="vms/b947bb8b-dad6-41ce-9f54-836a10775855_disk.config">
Dec  3 16:32:48 np0005544708 nova_compute[241566]:        <host name="192.168.122.100" port="6789"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      </source>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <auth username="openstack">
Dec  3 16:32:48 np0005544708 nova_compute[241566]:        <secret type="ceph" uuid="c21de27e-a7fd-594b-8324-0697ba9aab3a"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      </auth>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <target dev="sda" bus="sata"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    </disk>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <serial type="pty">
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <log file="/var/lib/nova/instances/b947bb8b-dad6-41ce-9f54-836a10775855/console.log" append="off"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    </serial>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <video>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <model type="virtio"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    </video>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <input type="tablet" bus="usb"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <rng model="virtio">
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <backend model="random">/dev/urandom</backend>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    </rng>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="pci" model="pcie-root-port"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <controller type="usb" index="0"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    <memballoon model="virtio">
Dec  3 16:32:48 np0005544708 nova_compute[241566]:      <stats period="10"/>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:    </memballoon>
Dec  3 16:32:48 np0005544708 nova_compute[241566]:  </devices>
Dec  3 16:32:48 np0005544708 nova_compute[241566]: </domain>
Dec  3 16:32:48 np0005544708 nova_compute[241566]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  3 16:32:48 np0005544708 nova_compute[241566]: 2025-12-03 21:32:48.915 241570 DEBUG nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 16:32:48 np0005544708 nova_compute[241566]: 2025-12-03 21:32:48.916 241570 DEBUG nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  3 16:32:48 np0005544708 nova_compute[241566]: 2025-12-03 21:32:48.916 241570 INFO nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Using config drive#033[00m
Dec  3 16:32:48 np0005544708 nova_compute[241566]: 2025-12-03 21:32:48.936 241570 DEBUG nova.storage.rbd_utils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] rbd image b947bb8b-dad6-41ce-9f54-836a10775855_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  3 16:32:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:32:48.937 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:32:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:32:48.937 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:32:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:32:48.937 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:32:49 np0005544708 nova_compute[241566]: 2025-12-03 21:32:49.125 241570 INFO nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Creating config drive at /var/lib/nova/instances/b947bb8b-dad6-41ce-9f54-836a10775855/disk.config#033[00m
Dec  3 16:32:49 np0005544708 nova_compute[241566]: 2025-12-03 21:32:49.129 241570 DEBUG oslo_concurrency.processutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b947bb8b-dad6-41ce-9f54-836a10775855/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu6nkyul8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 16:32:49 np0005544708 nova_compute[241566]: 2025-12-03 21:32:49.262 241570 DEBUG oslo_concurrency.processutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b947bb8b-dad6-41ce-9f54-836a10775855/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu6nkyul8" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 16:32:49 np0005544708 nova_compute[241566]: 2025-12-03 21:32:49.301 241570 DEBUG nova.storage.rbd_utils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] rbd image b947bb8b-dad6-41ce-9f54-836a10775855_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  3 16:32:49 np0005544708 nova_compute[241566]: 2025-12-03 21:32:49.306 241570 DEBUG oslo_concurrency.processutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b947bb8b-dad6-41ce-9f54-836a10775855/disk.config b947bb8b-dad6-41ce-9f54-836a10775855_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 16:32:49 np0005544708 nova_compute[241566]: 2025-12-03 21:32:49.626 241570 DEBUG oslo_concurrency.processutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b947bb8b-dad6-41ce-9f54-836a10775855/disk.config b947bb8b-dad6-41ce-9f54-836a10775855_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  3 16:32:49 np0005544708 nova_compute[241566]: 2025-12-03 21:32:49.628 241570 INFO nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Deleting local config drive /var/lib/nova/instances/b947bb8b-dad6-41ce-9f54-836a10775855/disk.config because it was imported into RBD.
Dec  3 16:32:49 np0005544708 systemd-machined[203931]: New machine qemu-2-instance-00000002.
Dec  3 16:32:49 np0005544708 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Dec  3 16:32:50 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v825: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.2 KiB/s wr, 52 op/s
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.279 241570 DEBUG nova.virt.driver [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Emitting event <LifecycleEvent: 1764797570.2792146, b947bb8b-dad6-41ce-9f54-836a10775855 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.281 241570 INFO nova.compute.manager [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] VM Resumed (Lifecycle Event)
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.287 241570 DEBUG nova.compute.manager [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.287 241570 DEBUG nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.291 241570 INFO nova.virt.libvirt.driver [-] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Instance spawned successfully.
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.291 241570 DEBUG nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.305 241570 DEBUG nova.compute.manager [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.311 241570 DEBUG nova.compute.manager [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.313 241570 DEBUG nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.314 241570 DEBUG nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.314 241570 DEBUG nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.315 241570 DEBUG nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.315 241570 DEBUG nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.315 241570 DEBUG nova.virt.libvirt.driver [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.339 241570 INFO nova.compute.manager [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.340 241570 DEBUG nova.virt.driver [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] Emitting event <LifecycleEvent: 1764797570.2802093, b947bb8b-dad6-41ce-9f54-836a10775855 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.340 241570 INFO nova.compute.manager [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] VM Started (Lifecycle Event)
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.367 241570 DEBUG nova.compute.manager [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.369 241570 DEBUG nova.compute.manager [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.376 241570 INFO nova.compute.manager [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Took 3.44 seconds to spawn the instance on the hypervisor.
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.377 241570 DEBUG nova.compute.manager [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.399 241570 INFO nova.compute.manager [None req-3e600e4f-6e74-4ea2-8332-f744551ec151 - - - - - -] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.434 241570 INFO nova.compute.manager [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Took 4.58 seconds to build instance.
Dec  3 16:32:50 np0005544708 nova_compute[241566]: 2025-12-03 21:32:50.461 241570 DEBUG oslo_concurrency.lockutils [None req-768b85f9-96b4-4f08-9f0a-22e2e26b4191 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "b947bb8b-dad6-41ce-9f54-836a10775855" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 16:32:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:32:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec  3 16:32:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec  3 16:32:51 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec  3 16:32:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:32:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:32:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:32:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:32:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:32:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:32:52 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v827: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 21 KiB/s wr, 110 op/s
Dec  3 16:32:52 np0005544708 nova_compute[241566]: 2025-12-03 21:32:52.682 241570 DEBUG nova.compute.manager [None req-2fb1d622-ab36-4bf2-96ca-793271147ac1 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  3 16:32:52 np0005544708 nova_compute[241566]: 2025-12-03 21:32:52.733 241570 INFO nova.compute.manager [None req-2fb1d622-ab36-4bf2-96ca-793271147ac1 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] instance snapshotting
Dec  3 16:32:52 np0005544708 nova_compute[241566]: 2025-12-03 21:32:52.967 241570 INFO nova.virt.libvirt.driver [None req-2fb1d622-ab36-4bf2-96ca-793271147ac1 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Beginning live snapshot process
Dec  3 16:32:53 np0005544708 nova_compute[241566]: 2025-12-03 21:32:53.148 241570 DEBUG nova.storage.rbd_utils [None req-2fb1d622-ab36-4bf2-96ca-793271147ac1 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] creating snapshot(bb28ec91a50a49e38214c6d118aff05d) on rbd image(b947bb8b-dad6-41ce-9f54-836a10775855_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec  3 16:32:53 np0005544708 podman[246879]: 2025-12-03 21:32:53.153281002 +0000 UTC m=+0.099072742 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  3 16:32:53 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec  3 16:32:53 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec  3 16:32:53 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec  3 16:32:53 np0005544708 nova_compute[241566]: 2025-12-03 21:32:53.623 241570 DEBUG nova.storage.rbd_utils [None req-2fb1d622-ab36-4bf2-96ca-793271147ac1 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] cloning vms/b947bb8b-dad6-41ce-9f54-836a10775855_disk@bb28ec91a50a49e38214c6d118aff05d to images/af22589e-230e-4308-9c23-43fa3e67646b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec  3 16:32:53 np0005544708 nova_compute[241566]: 2025-12-03 21:32:53.730 241570 DEBUG nova.storage.rbd_utils [None req-2fb1d622-ab36-4bf2-96ca-793271147ac1 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] flattening images/af22589e-230e-4308-9c23-43fa3e67646b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec  3 16:32:53 np0005544708 nova_compute[241566]: 2025-12-03 21:32:53.856 241570 DEBUG nova.storage.rbd_utils [None req-2fb1d622-ab36-4bf2-96ca-793271147ac1 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] removing snapshot(bb28ec91a50a49e38214c6d118aff05d) on rbd image(b947bb8b-dad6-41ce-9f54-836a10775855_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec  3 16:32:54 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v829: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 20 KiB/s wr, 70 op/s
Dec  3 16:32:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec  3 16:32:54 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec  3 16:32:54 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec  3 16:32:54 np0005544708 nova_compute[241566]: 2025-12-03 21:32:54.597 241570 DEBUG nova.storage.rbd_utils [None req-2fb1d622-ab36-4bf2-96ca-793271147ac1 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] creating snapshot(snap) on rbd image(af22589e-230e-4308-9c23-43fa3e67646b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec  3 16:32:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec  3 16:32:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec  3 16:32:55 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec  3 16:32:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:32:56 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v832: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 5.8 KiB/s wr, 131 op/s
Dec  3 16:32:56 np0005544708 nova_compute[241566]: 2025-12-03 21:32:56.966 241570 INFO nova.virt.libvirt.driver [None req-2fb1d622-ab36-4bf2-96ca-793271147ac1 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Snapshot image upload complete
Dec  3 16:32:56 np0005544708 nova_compute[241566]: 2025-12-03 21:32:56.967 241570 INFO nova.compute.manager [None req-2fb1d622-ab36-4bf2-96ca-793271147ac1 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Took 4.23 seconds to snapshot the instance on the hypervisor.
Dec  3 16:32:58 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v833: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 4.8 KiB/s wr, 109 op/s
Dec  3 16:32:58 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec  3 16:32:58 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec  3 16:32:58 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec  3 16:32:59 np0005544708 nova_compute[241566]: 2025-12-03 21:32:59.348 241570 DEBUG oslo_concurrency.lockutils [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Acquiring lock "b947bb8b-dad6-41ce-9f54-836a10775855" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 16:32:59 np0005544708 nova_compute[241566]: 2025-12-03 21:32:59.349 241570 DEBUG oslo_concurrency.lockutils [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "b947bb8b-dad6-41ce-9f54-836a10775855" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 16:32:59 np0005544708 nova_compute[241566]: 2025-12-03 21:32:59.349 241570 DEBUG oslo_concurrency.lockutils [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Acquiring lock "b947bb8b-dad6-41ce-9f54-836a10775855-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 16:32:59 np0005544708 nova_compute[241566]: 2025-12-03 21:32:59.349 241570 DEBUG oslo_concurrency.lockutils [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "b947bb8b-dad6-41ce-9f54-836a10775855-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 16:32:59 np0005544708 nova_compute[241566]: 2025-12-03 21:32:59.349 241570 DEBUG oslo_concurrency.lockutils [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "b947bb8b-dad6-41ce-9f54-836a10775855-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  3 16:32:59 np0005544708 nova_compute[241566]: 2025-12-03 21:32:59.350 241570 INFO nova.compute.manager [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Terminating instance
Dec  3 16:32:59 np0005544708 nova_compute[241566]: 2025-12-03 21:32:59.351 241570 DEBUG oslo_concurrency.lockutils [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Acquiring lock "refresh_cache-b947bb8b-dad6-41ce-9f54-836a10775855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  3 16:32:59 np0005544708 nova_compute[241566]: 2025-12-03 21:32:59.351 241570 DEBUG oslo_concurrency.lockutils [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Acquired lock "refresh_cache-b947bb8b-dad6-41ce-9f54-836a10775855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  3 16:32:59 np0005544708 nova_compute[241566]: 2025-12-03 21:32:59.351 241570 DEBUG nova.network.neutron [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  3 16:32:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  3 16:32:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3117375342' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  3 16:32:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  3 16:32:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3117375342' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  3 16:32:59 np0005544708 nova_compute[241566]: 2025-12-03 21:32:59.868 241570 DEBUG nova.network.neutron [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  3 16:33:00 np0005544708 nova_compute[241566]: 2025-12-03 21:33:00.138 241570 DEBUG nova.network.neutron [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 16:33:00 np0005544708 nova_compute[241566]: 2025-12-03 21:33:00.160 241570 DEBUG oslo_concurrency.lockutils [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Releasing lock "refresh_cache-b947bb8b-dad6-41ce-9f54-836a10775855" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  3 16:33:00 np0005544708 nova_compute[241566]: 2025-12-03 21:33:00.161 241570 DEBUG nova.compute.manager [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec  3 16:33:00 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v835: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 4.8 KiB/s wr, 109 op/s
Dec  3 16:33:00 np0005544708 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec  3 16:33:00 np0005544708 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 1.136s CPU time.
Dec  3 16:33:00 np0005544708 systemd-machined[203931]: Machine qemu-2-instance-00000002 terminated.
Dec  3 16:33:00 np0005544708 nova_compute[241566]: 2025-12-03 21:33:00.390 241570 INFO nova.virt.libvirt.driver [-] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Instance destroyed successfully.
Dec  3 16:33:00 np0005544708 nova_compute[241566]: 2025-12-03 21:33:00.391 241570 DEBUG nova.objects.instance [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lazy-loading 'resources' on Instance uuid b947bb8b-dad6-41ce-9f54-836a10775855 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  3 16:33:00 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec  3 16:33:00 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec  3 16:33:00 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec  3 16:33:00 np0005544708 nova_compute[241566]: 2025-12-03 21:33:00.847 241570 INFO nova.virt.libvirt.driver [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Deleting instance files /var/lib/nova/instances/b947bb8b-dad6-41ce-9f54-836a10775855_del
Dec  3 16:33:00 np0005544708 nova_compute[241566]: 2025-12-03 21:33:00.848 241570 INFO nova.virt.libvirt.driver [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Deletion of /var/lib/nova/instances/b947bb8b-dad6-41ce-9f54-836a10775855_del complete
Dec  3 16:33:00 np0005544708 nova_compute[241566]: 2025-12-03 21:33:00.894 241570 DEBUG nova.virt.libvirt.host [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Dec  3 16:33:00 np0005544708 nova_compute[241566]: 2025-12-03 21:33:00.895 241570 INFO nova.virt.libvirt.host [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] UEFI support detected
Dec  3 16:33:00 np0005544708 nova_compute[241566]: 2025-12-03 21:33:00.897 241570 INFO nova.compute.manager [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Took 0.74 seconds to destroy the instance on the hypervisor.
Dec  3 16:33:00 np0005544708 nova_compute[241566]: 2025-12-03 21:33:00.897 241570 DEBUG oslo.service.loopingcall [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  3 16:33:00 np0005544708 nova_compute[241566]: 2025-12-03 21:33:00.898 241570 DEBUG nova.compute.manager [-] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  3 16:33:00 np0005544708 nova_compute[241566]: 2025-12-03 21:33:00.898 241570 DEBUG nova.network.neutron [-] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  3 16:33:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:33:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec  3 16:33:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec  3 16:33:01 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec  3 16:33:01 np0005544708 nova_compute[241566]: 2025-12-03 21:33:01.855 241570 DEBUG nova.network.neutron [-] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  3 16:33:01 np0005544708 nova_compute[241566]: 2025-12-03 21:33:01.883 241570 DEBUG nova.network.neutron [-] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  3 16:33:01 np0005544708 nova_compute[241566]: 2025-12-03 21:33:01.905 241570 INFO nova.compute.manager [-] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Took 1.01 seconds to deallocate network for instance.
Dec  3 16:33:01 np0005544708 nova_compute[241566]: 2025-12-03 21:33:01.961 241570 DEBUG oslo_concurrency.lockutils [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  3 16:33:01 np0005544708 nova_compute[241566]: 2025-12-03 21:33:01.962 241570 DEBUG oslo_concurrency.lockutils [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  3 16:33:02 np0005544708 nova_compute[241566]: 2025-12-03 21:33:02.066 241570 DEBUG oslo_concurrency.processutils [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  3 16:33:02 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v838: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 4.7 KiB/s wr, 134 op/s
Dec  3 16:33:02 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:33:02 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/607795396' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:33:02 np0005544708 nova_compute[241566]: 2025-12-03 21:33:02.640 241570 DEBUG oslo_concurrency.processutils [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:33:02 np0005544708 nova_compute[241566]: 2025-12-03 21:33:02.645 241570 DEBUG nova.compute.provider_tree [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Inventory has not changed in ProviderTree for provider: 94aba67c-5c5e-45d0-83d1-33eb467c8775 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 16:33:02 np0005544708 nova_compute[241566]: 2025-12-03 21:33:02.663 241570 DEBUG nova.scheduler.client.report [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Inventory has not changed for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 16:33:02 np0005544708 nova_compute[241566]: 2025-12-03 21:33:02.693 241570 DEBUG oslo_concurrency.lockutils [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:33:02 np0005544708 nova_compute[241566]: 2025-12-03 21:33:02.719 241570 INFO nova.scheduler.client.report [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Deleted allocations for instance b947bb8b-dad6-41ce-9f54-836a10775855#033[00m
Dec  3 16:33:02 np0005544708 nova_compute[241566]: 2025-12-03 21:33:02.782 241570 DEBUG oslo_concurrency.lockutils [None req-06daedfd-ae09-471c-98d4-00f7eb2090ac bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "b947bb8b-dad6-41ce-9f54-836a10775855" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:33:03 np0005544708 nova_compute[241566]: 2025-12-03 21:33:03.257 241570 DEBUG oslo_concurrency.lockutils [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Acquiring lock "ab23bcbe-2091-4277-8f17-e9554b017c36" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:33:03 np0005544708 nova_compute[241566]: 2025-12-03 21:33:03.257 241570 DEBUG oslo_concurrency.lockutils [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "ab23bcbe-2091-4277-8f17-e9554b017c36" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:33:03 np0005544708 nova_compute[241566]: 2025-12-03 21:33:03.258 241570 DEBUG oslo_concurrency.lockutils [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Acquiring lock "ab23bcbe-2091-4277-8f17-e9554b017c36-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:33:03 np0005544708 nova_compute[241566]: 2025-12-03 21:33:03.258 241570 DEBUG oslo_concurrency.lockutils [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "ab23bcbe-2091-4277-8f17-e9554b017c36-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:33:03 np0005544708 nova_compute[241566]: 2025-12-03 21:33:03.258 241570 DEBUG oslo_concurrency.lockutils [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "ab23bcbe-2091-4277-8f17-e9554b017c36-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:33:03 np0005544708 nova_compute[241566]: 2025-12-03 21:33:03.259 241570 INFO nova.compute.manager [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Terminating instance#033[00m
Dec  3 16:33:03 np0005544708 nova_compute[241566]: 2025-12-03 21:33:03.260 241570 DEBUG oslo_concurrency.lockutils [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Acquiring lock "refresh_cache-ab23bcbe-2091-4277-8f17-e9554b017c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  3 16:33:03 np0005544708 nova_compute[241566]: 2025-12-03 21:33:03.260 241570 DEBUG oslo_concurrency.lockutils [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Acquired lock "refresh_cache-ab23bcbe-2091-4277-8f17-e9554b017c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  3 16:33:03 np0005544708 nova_compute[241566]: 2025-12-03 21:33:03.260 241570 DEBUG nova.network.neutron [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  3 16:33:03 np0005544708 nova_compute[241566]: 2025-12-03 21:33:03.857 241570 DEBUG nova.network.neutron [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 16:33:04 np0005544708 nova_compute[241566]: 2025-12-03 21:33:04.047 241570 DEBUG nova.network.neutron [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 16:33:04 np0005544708 nova_compute[241566]: 2025-12-03 21:33:04.069 241570 DEBUG oslo_concurrency.lockutils [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Releasing lock "refresh_cache-ab23bcbe-2091-4277-8f17-e9554b017c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  3 16:33:04 np0005544708 nova_compute[241566]: 2025-12-03 21:33:04.070 241570 DEBUG nova.compute.manager [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  3 16:33:04 np0005544708 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec  3 16:33:04 np0005544708 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 1.313s CPU time.
Dec  3 16:33:04 np0005544708 systemd-machined[203931]: Machine qemu-1-instance-00000001 terminated.
Dec  3 16:33:04 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v839: 177 pgs: 177 active+clean; 42 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 4.7 KiB/s wr, 134 op/s
Dec  3 16:33:04 np0005544708 nova_compute[241566]: 2025-12-03 21:33:04.296 241570 INFO nova.virt.libvirt.driver [-] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Instance destroyed successfully.#033[00m
Dec  3 16:33:04 np0005544708 nova_compute[241566]: 2025-12-03 21:33:04.296 241570 DEBUG nova.objects.instance [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lazy-loading 'resources' on Instance uuid ab23bcbe-2091-4277-8f17-e9554b017c36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  3 16:33:04 np0005544708 nova_compute[241566]: 2025-12-03 21:33:04.488 241570 INFO nova.virt.libvirt.driver [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Deleting instance files /var/lib/nova/instances/ab23bcbe-2091-4277-8f17-e9554b017c36_del#033[00m
Dec  3 16:33:04 np0005544708 nova_compute[241566]: 2025-12-03 21:33:04.489 241570 INFO nova.virt.libvirt.driver [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Deletion of /var/lib/nova/instances/ab23bcbe-2091-4277-8f17-e9554b017c36_del complete#033[00m
Dec  3 16:33:04 np0005544708 nova_compute[241566]: 2025-12-03 21:33:04.566 241570 INFO nova.compute.manager [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Took 0.50 seconds to destroy the instance on the hypervisor.#033[00m
Dec  3 16:33:04 np0005544708 nova_compute[241566]: 2025-12-03 21:33:04.566 241570 DEBUG oslo.service.loopingcall [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  3 16:33:04 np0005544708 nova_compute[241566]: 2025-12-03 21:33:04.567 241570 DEBUG nova.compute.manager [-] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  3 16:33:04 np0005544708 nova_compute[241566]: 2025-12-03 21:33:04.567 241570 DEBUG nova.network.neutron [-] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  3 16:33:04 np0005544708 nova_compute[241566]: 2025-12-03 21:33:04.849 241570 DEBUG nova.network.neutron [-] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  3 16:33:04 np0005544708 nova_compute[241566]: 2025-12-03 21:33:04.862 241570 DEBUG nova.network.neutron [-] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  3 16:33:04 np0005544708 nova_compute[241566]: 2025-12-03 21:33:04.871 241570 INFO nova.compute.manager [-] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Took 0.30 seconds to deallocate network for instance.#033[00m
Dec  3 16:33:05 np0005544708 nova_compute[241566]: 2025-12-03 21:33:05.122 241570 INFO nova.compute.manager [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Took 0.25 seconds to detach 1 volumes for instance.#033[00m
Dec  3 16:33:05 np0005544708 nova_compute[241566]: 2025-12-03 21:33:05.124 241570 DEBUG nova.compute.manager [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Deleting volume: 74f6cb4b-c1f6-4650-97bb-811b731c0960 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Dec  3 16:33:05 np0005544708 nova_compute[241566]: 2025-12-03 21:33:05.579 241570 DEBUG oslo_concurrency.lockutils [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:33:05 np0005544708 nova_compute[241566]: 2025-12-03 21:33:05.580 241570 DEBUG oslo_concurrency.lockutils [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:33:05 np0005544708 nova_compute[241566]: 2025-12-03 21:33:05.644 241570 DEBUG oslo_concurrency.processutils [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:33:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:33:06 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2290692199' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:33:06 np0005544708 nova_compute[241566]: 2025-12-03 21:33:06.141 241570 DEBUG oslo_concurrency.processutils [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:33:06 np0005544708 nova_compute[241566]: 2025-12-03 21:33:06.146 241570 DEBUG nova.compute.provider_tree [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Inventory has not changed in ProviderTree for provider: 94aba67c-5c5e-45d0-83d1-33eb467c8775 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 16:33:06 np0005544708 nova_compute[241566]: 2025-12-03 21:33:06.158 241570 DEBUG nova.scheduler.client.report [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Inventory has not changed for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 16:33:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:33:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec  3 16:33:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec  3 16:33:06 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec  3 16:33:06 np0005544708 nova_compute[241566]: 2025-12-03 21:33:06.180 241570 DEBUG oslo_concurrency.lockutils [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:33:06 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v841: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 167 KiB/s rd, 7.3 KiB/s wr, 218 op/s
Dec  3 16:33:06 np0005544708 nova_compute[241566]: 2025-12-03 21:33:06.206 241570 INFO nova.scheduler.client.report [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Deleted allocations for instance ab23bcbe-2091-4277-8f17-e9554b017c36#033[00m
Dec  3 16:33:06 np0005544708 nova_compute[241566]: 2025-12-03 21:33:06.259 241570 DEBUG oslo_concurrency.lockutils [None req-974e43e7-7aec-4d08-9363-359f6d9daf79 bc25c6732c60417d92846f1367ba9a4f 11092597966341b0915e8c2a6530e568 - - default default] Lock "ab23bcbe-2091-4277-8f17-e9554b017c36" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:33:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  3 16:33:06 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1178596790' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  3 16:33:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  3 16:33:06 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1178596790' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  3 16:33:07 np0005544708 podman[247137]: 2025-12-03 21:33:07.152255978 +0000 UTC m=+0.083216456 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  3 16:33:07 np0005544708 podman[247136]: 2025-12-03 21:33:07.159669777 +0000 UTC m=+0.093324228 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec  3 16:33:08 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v842: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 4.4 KiB/s wr, 144 op/s
Dec  3 16:33:09 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:33:09.936 151937 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:b3:fa', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:85:3a:67:f5:74'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 16:33:09 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:33:09.937 151937 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 16:33:10 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v843: 177 pgs: 177 active+clean; 41 MiB data, 176 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 KiB/s wr, 55 op/s
Dec  3 16:33:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:33:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec  3 16:33:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec  3 16:33:11 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec  3 16:33:12 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v845: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 2.6 KiB/s wr, 72 op/s
Dec  3 16:33:13 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:33:13.940 151937 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f27c01e7-5b62-4209-a664-3ae50b74644d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 16:33:14 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v846: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 638 B/s wr, 9 op/s
Dec  3 16:33:15 np0005544708 nova_compute[241566]: 2025-12-03 21:33:15.386 241570 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764797580.3855567, b947bb8b-dad6-41ce-9f54-836a10775855 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 16:33:15 np0005544708 nova_compute[241566]: 2025-12-03 21:33:15.387 241570 INFO nova.compute.manager [-] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] VM Stopped (Lifecycle Event)#033[00m
Dec  3 16:33:16 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v847: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 504 B/s wr, 7 op/s
Dec  3 16:33:16 np0005544708 nova_compute[241566]: 2025-12-03 21:33:16.350 241570 DEBUG nova.compute.manager [None req-b0abebbc-23df-47c7-a914-957ac118e66d - - - - - -] [instance: b947bb8b-dad6-41ce-9f54-836a10775855] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 16:33:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:33:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec  3 16:33:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec  3 16:33:16 np0005544708 ceph-mon[75204]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec  3 16:33:18 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v849: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 628 B/s wr, 9 op/s
Dec  3 16:33:19 np0005544708 nova_compute[241566]: 2025-12-03 21:33:19.295 241570 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764797584.293983, ab23bcbe-2091-4277-8f17-e9554b017c36 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  3 16:33:19 np0005544708 nova_compute[241566]: 2025-12-03 21:33:19.296 241570 INFO nova.compute.manager [-] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] VM Stopped (Lifecycle Event)#033[00m
Dec  3 16:33:19 np0005544708 nova_compute[241566]: 2025-12-03 21:33:19.326 241570 DEBUG nova.compute.manager [None req-1cb47d76-34be-4bac-8a2c-e89955c35793 - - - - - -] [instance: ab23bcbe-2091-4277-8f17-e9554b017c36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  3 16:33:20 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v850: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:20 np0005544708 nova_compute[241566]: 2025-12-03 21:33:20.488 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:33:20 np0005544708 nova_compute[241566]: 2025-12-03 21:33:20.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:33:21
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['images', 'vms', 'cephfs.cephfs.data', 'backups', 'volumes', 'cephfs.cephfs.meta', '.mgr']
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:33:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:33:21 np0005544708 nova_compute[241566]: 2025-12-03 21:33:21.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:33:21 np0005544708 nova_compute[241566]: 2025-12-03 21:33:21.551 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 16:33:21 np0005544708 nova_compute[241566]: 2025-12-03 21:33:21.552 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 16:33:21 np0005544708 nova_compute[241566]: 2025-12-03 21:33:21.573 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 16:33:21 np0005544708 nova_compute[241566]: 2025-12-03 21:33:21.573 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:33:21 np0005544708 nova_compute[241566]: 2025-12-03 21:33:21.574 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:33:21 np0005544708 nova_compute[241566]: 2025-12-03 21:33:21.574 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:33:21 np0005544708 nova_compute[241566]: 2025-12-03 21:33:21.575 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:33:21 np0005544708 nova_compute[241566]: 2025-12-03 21:33:21.575 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:33:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:33:22 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v851: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:22 np0005544708 nova_compute[241566]: 2025-12-03 21:33:22.553 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:33:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:33:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:33:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:33:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:33:23 np0005544708 nova_compute[241566]: 2025-12-03 21:33:23.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:33:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:33:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:33:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:33:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:33:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:33:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:33:23 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:33:23 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:33:23 np0005544708 nova_compute[241566]: 2025-12-03 21:33:23.582 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:33:23 np0005544708 nova_compute[241566]: 2025-12-03 21:33:23.583 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:33:23 np0005544708 nova_compute[241566]: 2025-12-03 21:33:23.583 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:33:23 np0005544708 nova_compute[241566]: 2025-12-03 21:33:23.583 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 16:33:23 np0005544708 nova_compute[241566]: 2025-12-03 21:33:23.584 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:33:23 np0005544708 podman[247282]: 2025-12-03 21:33:23.813369507 +0000 UTC m=+0.122275267 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec  3 16:33:24 np0005544708 podman[247366]: 2025-12-03 21:33:24.083276898 +0000 UTC m=+0.060872797 container create 6d40f2bd1e994b59e60f82082ae0713f772e7df6d474c68eff29cab836d95980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:33:24 np0005544708 systemd[1]: Started libpod-conmon-6d40f2bd1e994b59e60f82082ae0713f772e7df6d474c68eff29cab836d95980.scope.
Dec  3 16:33:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:33:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3406652418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:33:24 np0005544708 podman[247366]: 2025-12-03 21:33:24.051301149 +0000 UTC m=+0.028897108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:33:24 np0005544708 nova_compute[241566]: 2025-12-03 21:33:24.163 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:33:24 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:33:24 np0005544708 podman[247366]: 2025-12-03 21:33:24.192458091 +0000 UTC m=+0.170054050 container init 6d40f2bd1e994b59e60f82082ae0713f772e7df6d474c68eff29cab836d95980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_galois, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:33:24 np0005544708 podman[247366]: 2025-12-03 21:33:24.206450437 +0000 UTC m=+0.184046306 container start 6d40f2bd1e994b59e60f82082ae0713f772e7df6d474c68eff29cab836d95980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:33:24 np0005544708 podman[247366]: 2025-12-03 21:33:24.210016693 +0000 UTC m=+0.187612592 container attach 6d40f2bd1e994b59e60f82082ae0713f772e7df6d474c68eff29cab836d95980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_galois, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:33:24 np0005544708 romantic_galois[247383]: 167 167
Dec  3 16:33:24 np0005544708 systemd[1]: libpod-6d40f2bd1e994b59e60f82082ae0713f772e7df6d474c68eff29cab836d95980.scope: Deactivated successfully.
Dec  3 16:33:24 np0005544708 conmon[247383]: conmon 6d40f2bd1e994b59e60f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6d40f2bd1e994b59e60f82082ae0713f772e7df6d474c68eff29cab836d95980.scope/container/memory.events
Dec  3 16:33:24 np0005544708 podman[247366]: 2025-12-03 21:33:24.217954696 +0000 UTC m=+0.195550575 container died 6d40f2bd1e994b59e60f82082ae0713f772e7df6d474c68eff29cab836d95980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_galois, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  3 16:33:24 np0005544708 systemd[1]: var-lib-containers-storage-overlay-0fdda0ee0e29b82c15256f60bc9bee099adeafd9a7a4237910160548a6ba1361-merged.mount: Deactivated successfully.
Dec  3 16:33:24 np0005544708 podman[247366]: 2025-12-03 21:33:24.26948037 +0000 UTC m=+0.247076249 container remove 6d40f2bd1e994b59e60f82082ae0713f772e7df6d474c68eff29cab836d95980 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_galois, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  3 16:33:24 np0005544708 systemd[1]: libpod-conmon-6d40f2bd1e994b59e60f82082ae0713f772e7df6d474c68eff29cab836d95980.scope: Deactivated successfully.
Dec  3 16:33:24 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v852: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:24 np0005544708 nova_compute[241566]: 2025-12-03 21:33:24.363 241570 WARNING nova.virt.libvirt.driver [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 16:33:24 np0005544708 nova_compute[241566]: 2025-12-03 21:33:24.364 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5165MB free_disk=59.988260054029524GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 16:33:24 np0005544708 nova_compute[241566]: 2025-12-03 21:33:24.364 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:33:24 np0005544708 nova_compute[241566]: 2025-12-03 21:33:24.365 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:33:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:33:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:33:24 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:33:24 np0005544708 nova_compute[241566]: 2025-12-03 21:33:24.420 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 16:33:24 np0005544708 nova_compute[241566]: 2025-12-03 21:33:24.421 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 16:33:24 np0005544708 nova_compute[241566]: 2025-12-03 21:33:24.444 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:33:24 np0005544708 podman[247410]: 2025-12-03 21:33:24.449426596 +0000 UTC m=+0.041444865 container create c7cd500686839ef0d4460aaf32715067a764c66f27c0c0fb3b733d2e2f13d705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_pare, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:33:24 np0005544708 systemd[1]: Started libpod-conmon-c7cd500686839ef0d4460aaf32715067a764c66f27c0c0fb3b733d2e2f13d705.scope.
Dec  3 16:33:24 np0005544708 podman[247410]: 2025-12-03 21:33:24.427855646 +0000 UTC m=+0.019873955 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:33:24 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:33:24 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb256fb5a38a2c9b9952de8683b236616556cf5709e3b98e4c854fff4787fc36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:33:24 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb256fb5a38a2c9b9952de8683b236616556cf5709e3b98e4c854fff4787fc36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:33:24 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb256fb5a38a2c9b9952de8683b236616556cf5709e3b98e4c854fff4787fc36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:33:24 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb256fb5a38a2c9b9952de8683b236616556cf5709e3b98e4c854fff4787fc36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:33:24 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb256fb5a38a2c9b9952de8683b236616556cf5709e3b98e4c854fff4787fc36/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:33:24 np0005544708 podman[247410]: 2025-12-03 21:33:24.550773028 +0000 UTC m=+0.142791347 container init c7cd500686839ef0d4460aaf32715067a764c66f27c0c0fb3b733d2e2f13d705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_pare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:33:24 np0005544708 podman[247410]: 2025-12-03 21:33:24.563701386 +0000 UTC m=+0.155719655 container start c7cd500686839ef0d4460aaf32715067a764c66f27c0c0fb3b733d2e2f13d705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_pare, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 16:33:24 np0005544708 podman[247410]: 2025-12-03 21:33:24.567865987 +0000 UTC m=+0.159884286 container attach c7cd500686839ef0d4460aaf32715067a764c66f27c0c0fb3b733d2e2f13d705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_pare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:33:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:33:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2227054090' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:33:24 np0005544708 nova_compute[241566]: 2025-12-03 21:33:24.963 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:33:24 np0005544708 nova_compute[241566]: 2025-12-03 21:33:24.971 241570 DEBUG nova.compute.provider_tree [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed in ProviderTree for provider: 94aba67c-5c5e-45d0-83d1-33eb467c8775 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 16:33:24 np0005544708 sleepy_pare[247429]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:33:24 np0005544708 sleepy_pare[247429]: --> All data devices are unavailable
Dec  3 16:33:24 np0005544708 nova_compute[241566]: 2025-12-03 21:33:24.987 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 16:33:25 np0005544708 systemd[1]: libpod-c7cd500686839ef0d4460aaf32715067a764c66f27c0c0fb3b733d2e2f13d705.scope: Deactivated successfully.
Dec  3 16:33:25 np0005544708 podman[247410]: 2025-12-03 21:33:25.007126739 +0000 UTC m=+0.599145028 container died c7cd500686839ef0d4460aaf32715067a764c66f27c0c0fb3b733d2e2f13d705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_pare, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  3 16:33:25 np0005544708 nova_compute[241566]: 2025-12-03 21:33:25.012 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 16:33:25 np0005544708 nova_compute[241566]: 2025-12-03 21:33:25.013 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:33:25 np0005544708 systemd[1]: var-lib-containers-storage-overlay-fb256fb5a38a2c9b9952de8683b236616556cf5709e3b98e4c854fff4787fc36-merged.mount: Deactivated successfully.
Dec  3 16:33:25 np0005544708 podman[247410]: 2025-12-03 21:33:25.054381359 +0000 UTC m=+0.646399648 container remove c7cd500686839ef0d4460aaf32715067a764c66f27c0c0fb3b733d2e2f13d705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_pare, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:33:25 np0005544708 systemd[1]: libpod-conmon-c7cd500686839ef0d4460aaf32715067a764c66f27c0c0fb3b733d2e2f13d705.scope: Deactivated successfully.
Dec  3 16:33:25 np0005544708 podman[247546]: 2025-12-03 21:33:25.620710065 +0000 UTC m=+0.068400770 container create cf7b3fb4cb978ba73a83959408ae9245bf046e55cd6bbbc0933f2642e74f35d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_keldysh, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  3 16:33:25 np0005544708 systemd[1]: Started libpod-conmon-cf7b3fb4cb978ba73a83959408ae9245bf046e55cd6bbbc0933f2642e74f35d1.scope.
Dec  3 16:33:25 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:33:25 np0005544708 podman[247546]: 2025-12-03 21:33:25.599232997 +0000 UTC m=+0.046923722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:33:25 np0005544708 podman[247546]: 2025-12-03 21:33:25.710953549 +0000 UTC m=+0.158644304 container init cf7b3fb4cb978ba73a83959408ae9245bf046e55cd6bbbc0933f2642e74f35d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:33:25 np0005544708 podman[247546]: 2025-12-03 21:33:25.724278467 +0000 UTC m=+0.171969172 container start cf7b3fb4cb978ba73a83959408ae9245bf046e55cd6bbbc0933f2642e74f35d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:33:25 np0005544708 podman[247546]: 2025-12-03 21:33:25.728159831 +0000 UTC m=+0.175850576 container attach cf7b3fb4cb978ba73a83959408ae9245bf046e55cd6bbbc0933f2642e74f35d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_keldysh, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:33:25 np0005544708 elated_keldysh[247562]: 167 167
Dec  3 16:33:25 np0005544708 podman[247546]: 2025-12-03 21:33:25.731748527 +0000 UTC m=+0.179439272 container died cf7b3fb4cb978ba73a83959408ae9245bf046e55cd6bbbc0933f2642e74f35d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_keldysh, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:33:25 np0005544708 systemd[1]: libpod-cf7b3fb4cb978ba73a83959408ae9245bf046e55cd6bbbc0933f2642e74f35d1.scope: Deactivated successfully.
Dec  3 16:33:25 np0005544708 systemd[1]: var-lib-containers-storage-overlay-1bc2f77d000c211756d2810d0b5fa532ed97a0006e089ed97ab93352ca83e900-merged.mount: Deactivated successfully.
Dec  3 16:33:25 np0005544708 podman[247546]: 2025-12-03 21:33:25.779052668 +0000 UTC m=+0.226743403 container remove cf7b3fb4cb978ba73a83959408ae9245bf046e55cd6bbbc0933f2642e74f35d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_keldysh, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  3 16:33:25 np0005544708 systemd[1]: libpod-conmon-cf7b3fb4cb978ba73a83959408ae9245bf046e55cd6bbbc0933f2642e74f35d1.scope: Deactivated successfully.
Dec  3 16:33:26 np0005544708 nova_compute[241566]: 2025-12-03 21:33:26.008 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:33:26 np0005544708 podman[247586]: 2025-12-03 21:33:26.025748716 +0000 UTC m=+0.078614923 container create 1045982a7db5002ffa6b211417a033e99f7ac4ea9120f967a516827ac61bb9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:33:26 np0005544708 systemd[1]: Started libpod-conmon-1045982a7db5002ffa6b211417a033e99f7ac4ea9120f967a516827ac61bb9a2.scope.
Dec  3 16:33:26 np0005544708 podman[247586]: 2025-12-03 21:33:25.996291255 +0000 UTC m=+0.049157512 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:33:26 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:33:26 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a085c9bb03b19fdea05dd52b07174b47a64d4c74eeafcb5edd867d99b112b7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:33:26 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a085c9bb03b19fdea05dd52b07174b47a64d4c74eeafcb5edd867d99b112b7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:33:26 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a085c9bb03b19fdea05dd52b07174b47a64d4c74eeafcb5edd867d99b112b7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:33:26 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a085c9bb03b19fdea05dd52b07174b47a64d4c74eeafcb5edd867d99b112b7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:33:26 np0005544708 podman[247586]: 2025-12-03 21:33:26.133914773 +0000 UTC m=+0.186781050 container init 1045982a7db5002ffa6b211417a033e99f7ac4ea9120f967a516827ac61bb9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  3 16:33:26 np0005544708 podman[247586]: 2025-12-03 21:33:26.145626247 +0000 UTC m=+0.198492414 container start 1045982a7db5002ffa6b211417a033e99f7ac4ea9120f967a516827ac61bb9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bouman, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Dec  3 16:33:26 np0005544708 podman[247586]: 2025-12-03 21:33:26.14871866 +0000 UTC m=+0.201584837 container attach 1045982a7db5002ffa6b211417a033e99f7ac4ea9120f967a516827ac61bb9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bouman, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 16:33:26 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v853: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]: {
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:    "0": [
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:        {
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "devices": [
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "/dev/loop3"
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            ],
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "lv_name": "ceph_lv0",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "lv_size": "21470642176",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "name": "ceph_lv0",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "tags": {
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.cluster_name": "ceph",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.crush_device_class": "",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.encrypted": "0",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.objectstore": "bluestore",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.osd_id": "0",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.type": "block",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.vdo": "0",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.with_tpm": "0"
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            },
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "type": "block",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "vg_name": "ceph_vg0"
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:        }
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:    ],
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:    "1": [
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:        {
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "devices": [
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "/dev/loop4"
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            ],
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "lv_name": "ceph_lv1",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "lv_size": "21470642176",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "name": "ceph_lv1",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "tags": {
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.cluster_name": "ceph",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.crush_device_class": "",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.encrypted": "0",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.objectstore": "bluestore",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.osd_id": "1",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.type": "block",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.vdo": "0",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.with_tpm": "0"
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            },
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "type": "block",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "vg_name": "ceph_vg1"
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:        }
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:    ],
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:    "2": [
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:        {
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "devices": [
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "/dev/loop5"
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            ],
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "lv_name": "ceph_lv2",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "lv_size": "21470642176",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "name": "ceph_lv2",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "tags": {
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.cluster_name": "ceph",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.crush_device_class": "",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.encrypted": "0",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.objectstore": "bluestore",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.osd_id": "2",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.type": "block",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.vdo": "0",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:                "ceph.with_tpm": "0"
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            },
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "type": "block",
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:            "vg_name": "ceph_vg2"
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:        }
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]:    ]
Dec  3 16:33:26 np0005544708 frosty_bouman[247602]: }
Dec  3 16:33:26 np0005544708 systemd[1]: libpod-1045982a7db5002ffa6b211417a033e99f7ac4ea9120f967a516827ac61bb9a2.scope: Deactivated successfully.
Dec  3 16:33:26 np0005544708 podman[247586]: 2025-12-03 21:33:26.512306829 +0000 UTC m=+0.565173066 container died 1045982a7db5002ffa6b211417a033e99f7ac4ea9120f967a516827ac61bb9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  3 16:33:26 np0005544708 systemd[1]: var-lib-containers-storage-overlay-8a085c9bb03b19fdea05dd52b07174b47a64d4c74eeafcb5edd867d99b112b7d-merged.mount: Deactivated successfully.
Dec  3 16:33:26 np0005544708 podman[247586]: 2025-12-03 21:33:26.571830998 +0000 UTC m=+0.624697205 container remove 1045982a7db5002ffa6b211417a033e99f7ac4ea9120f967a516827ac61bb9a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:33:26 np0005544708 systemd[1]: libpod-conmon-1045982a7db5002ffa6b211417a033e99f7ac4ea9120f967a516827ac61bb9a2.scope: Deactivated successfully.
Dec  3 16:33:27 np0005544708 podman[247685]: 2025-12-03 21:33:27.113634325 +0000 UTC m=+0.061896834 container create 6afe8d8cdd8d66d096998bed0356ce4ed976ba57f72e00ae3acf380c14d7d077 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:33:27 np0005544708 systemd[1]: Started libpod-conmon-6afe8d8cdd8d66d096998bed0356ce4ed976ba57f72e00ae3acf380c14d7d077.scope.
Dec  3 16:33:27 np0005544708 podman[247685]: 2025-12-03 21:33:27.092309982 +0000 UTC m=+0.040572471 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:33:27 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:33:27 np0005544708 podman[247685]: 2025-12-03 21:33:27.216391866 +0000 UTC m=+0.164654415 container init 6afe8d8cdd8d66d096998bed0356ce4ed976ba57f72e00ae3acf380c14d7d077 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 16:33:27 np0005544708 podman[247685]: 2025-12-03 21:33:27.224676728 +0000 UTC m=+0.172939237 container start 6afe8d8cdd8d66d096998bed0356ce4ed976ba57f72e00ae3acf380c14d7d077 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:33:27 np0005544708 sleepy_rhodes[247701]: 167 167
Dec  3 16:33:27 np0005544708 podman[247685]: 2025-12-03 21:33:27.228665266 +0000 UTC m=+0.176927815 container attach 6afe8d8cdd8d66d096998bed0356ce4ed976ba57f72e00ae3acf380c14d7d077 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec  3 16:33:27 np0005544708 systemd[1]: libpod-6afe8d8cdd8d66d096998bed0356ce4ed976ba57f72e00ae3acf380c14d7d077.scope: Deactivated successfully.
Dec  3 16:33:27 np0005544708 podman[247685]: 2025-12-03 21:33:27.229757035 +0000 UTC m=+0.178019564 container died 6afe8d8cdd8d66d096998bed0356ce4ed976ba57f72e00ae3acf380c14d7d077 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 16:33:27 np0005544708 systemd[1]: var-lib-containers-storage-overlay-69f9a57bc1366843403f4948762dcbf12a4353d73daaa7487b37ba1d8e965499-merged.mount: Deactivated successfully.
Dec  3 16:33:27 np0005544708 podman[247685]: 2025-12-03 21:33:27.266479461 +0000 UTC m=+0.214741940 container remove 6afe8d8cdd8d66d096998bed0356ce4ed976ba57f72e00ae3acf380c14d7d077 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:33:27 np0005544708 systemd[1]: libpod-conmon-6afe8d8cdd8d66d096998bed0356ce4ed976ba57f72e00ae3acf380c14d7d077.scope: Deactivated successfully.
Dec  3 16:33:27 np0005544708 podman[247724]: 2025-12-03 21:33:27.444164246 +0000 UTC m=+0.038397373 container create 84b021d7f0df9e55a8269a3eb005c7c077902ea30d619aaf42740d8497f254b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:33:27 np0005544708 systemd[1]: Started libpod-conmon-84b021d7f0df9e55a8269a3eb005c7c077902ea30d619aaf42740d8497f254b6.scope.
Dec  3 16:33:27 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:33:27 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01fa77910200371e1cfb1c650b7c576686bc56e773e383bb14561e2490e9c7cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:33:27 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01fa77910200371e1cfb1c650b7c576686bc56e773e383bb14561e2490e9c7cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:33:27 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01fa77910200371e1cfb1c650b7c576686bc56e773e383bb14561e2490e9c7cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:33:27 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01fa77910200371e1cfb1c650b7c576686bc56e773e383bb14561e2490e9c7cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:33:27 np0005544708 podman[247724]: 2025-12-03 21:33:27.424840907 +0000 UTC m=+0.019074054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:33:27 np0005544708 podman[247724]: 2025-12-03 21:33:27.541988224 +0000 UTC m=+0.136221371 container init 84b021d7f0df9e55a8269a3eb005c7c077902ea30d619aaf42740d8497f254b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  3 16:33:27 np0005544708 podman[247724]: 2025-12-03 21:33:27.553667648 +0000 UTC m=+0.147900785 container start 84b021d7f0df9e55a8269a3eb005c7c077902ea30d619aaf42740d8497f254b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  3 16:33:27 np0005544708 podman[247724]: 2025-12-03 21:33:27.557254814 +0000 UTC m=+0.151487961 container attach 84b021d7f0df9e55a8269a3eb005c7c077902ea30d619aaf42740d8497f254b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 16:33:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:33:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:33:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:33:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:33:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.533351853729544e-07 of space, bias 1.0, pg target 0.00010600055561188632 quantized to 32 (current 32)
Dec  3 16:33:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:33:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.833582600959152e-08 of space, bias 1.0, pg target 2.9500747802877454e-05 quantized to 32 (current 32)
Dec  3 16:33:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:33:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:33:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:33:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006678894471709225 of space, bias 1.0, pg target 0.20036683415127676 quantized to 32 (current 32)
Dec  3 16:33:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:33:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.953496000112683e-07 of space, bias 4.0, pg target 0.0009544195200135219 quantized to 16 (current 16)
Dec  3 16:33:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:33:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:33:28 np0005544708 lvm[247820]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:33:28 np0005544708 lvm[247820]: VG ceph_vg1 finished
Dec  3 16:33:28 np0005544708 lvm[247817]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:33:28 np0005544708 lvm[247817]: VG ceph_vg0 finished
Dec  3 16:33:28 np0005544708 lvm[247821]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:33:28 np0005544708 lvm[247821]: VG ceph_vg2 finished
Dec  3 16:33:28 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v854: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:28 np0005544708 trusting_bell[247740]: {}
Dec  3 16:33:28 np0005544708 systemd[1]: libpod-84b021d7f0df9e55a8269a3eb005c7c077902ea30d619aaf42740d8497f254b6.scope: Deactivated successfully.
Dec  3 16:33:28 np0005544708 systemd[1]: libpod-84b021d7f0df9e55a8269a3eb005c7c077902ea30d619aaf42740d8497f254b6.scope: Consumed 1.331s CPU time.
Dec  3 16:33:28 np0005544708 podman[247724]: 2025-12-03 21:33:28.393369228 +0000 UTC m=+0.987602385 container died 84b021d7f0df9e55a8269a3eb005c7c077902ea30d619aaf42740d8497f254b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  3 16:33:28 np0005544708 systemd[1]: var-lib-containers-storage-overlay-01fa77910200371e1cfb1c650b7c576686bc56e773e383bb14561e2490e9c7cb-merged.mount: Deactivated successfully.
Dec  3 16:33:28 np0005544708 podman[247724]: 2025-12-03 21:33:28.443896586 +0000 UTC m=+1.038129733 container remove 84b021d7f0df9e55a8269a3eb005c7c077902ea30d619aaf42740d8497f254b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:33:28 np0005544708 systemd[1]: libpod-conmon-84b021d7f0df9e55a8269a3eb005c7c077902ea30d619aaf42740d8497f254b6.scope: Deactivated successfully.
Dec  3 16:33:28 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:33:28 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:33:28 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:33:28 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:33:29 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:33:29 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:33:30 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v855: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:33:32 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v856: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:34 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v857: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:33:36 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v858: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:38 np0005544708 podman[247860]: 2025-12-03 21:33:38.166746069 +0000 UTC m=+0.086016272 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 16:33:38 np0005544708 podman[247859]: 2025-12-03 21:33:38.169448231 +0000 UTC m=+0.089911816 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  3 16:33:38 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v859: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:40 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v860: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:33:42 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v861: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:44 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v862: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:33:46 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v863: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:48 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v864: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:33:48.937 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:33:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:33:48.938 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:33:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:33:48.938 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:33:50 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v865: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:33:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:33:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:33:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:33:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:33:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:33:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:33:52 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v866: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:54 np0005544708 podman[247897]: 2025-12-03 21:33:54.230320303 +0000 UTC m=+0.167209444 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 16:33:54 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v867: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:33:56.364797) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797636364917, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1989, "num_deletes": 256, "total_data_size": 2085540, "memory_usage": 2135448, "flush_reason": "Manual Compaction"}
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Dec  3 16:33:56 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v868: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797636378993, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1461730, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15713, "largest_seqno": 17701, "table_properties": {"data_size": 1454049, "index_size": 4435, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17812, "raw_average_key_size": 20, "raw_value_size": 1437803, "raw_average_value_size": 1689, "num_data_blocks": 199, "num_entries": 851, "num_filter_entries": 851, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764797488, "oldest_key_time": 1764797488, "file_creation_time": 1764797636, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 14318 microseconds, and 8581 cpu microseconds.
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:33:56.379128) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1461730 bytes OK
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:33:56.379161) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:33:56.380734) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:33:56.380759) EVENT_LOG_v1 {"time_micros": 1764797636380753, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:33:56.380788) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2077013, prev total WAL file size 2077013, number of live WAL files 2.
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:33:56.381951) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1427KB)], [38(5681KB)]
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797636382040, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 7279717, "oldest_snapshot_seqno": -1}
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 3927 keys, 5741631 bytes, temperature: kUnknown
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797636431003, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 5741631, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5713727, "index_size": 16978, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9861, "raw_key_size": 92302, "raw_average_key_size": 23, "raw_value_size": 5641511, "raw_average_value_size": 1436, "num_data_blocks": 731, "num_entries": 3927, "num_filter_entries": 3927, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796079, "oldest_key_time": 0, "file_creation_time": 1764797636, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:33:56.431434) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 5741631 bytes
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:33:56.433108) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.3 rd, 117.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 5.5 +0.0 blob) out(5.5 +0.0 blob), read-write-amplify(8.9) write-amplify(3.9) OK, records in: 4381, records dropped: 454 output_compression: NoCompression
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:33:56.433138) EVENT_LOG_v1 {"time_micros": 1764797636433123, "job": 18, "event": "compaction_finished", "compaction_time_micros": 49083, "compaction_time_cpu_micros": 29663, "output_level": 6, "num_output_files": 1, "total_output_size": 5741631, "num_input_records": 4381, "num_output_records": 3927, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797636433843, "job": 18, "event": "table_file_deletion", "file_number": 40}
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797636435895, "job": 18, "event": "table_file_deletion", "file_number": 38}
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:33:56.381802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:33:56.435950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:33:56.435957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:33:56.435960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:33:56.435963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:33:56 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:33:56.435967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:33:58 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v869: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:33:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  3 16:33:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2237149326' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  3 16:33:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  3 16:33:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2237149326' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  3 16:34:00 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v870: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:34:02 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v871: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:04 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v872: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:04 np0005544708 systemd-logind[787]: New session 51 of user zuul.
Dec  3 16:34:04 np0005544708 systemd[1]: Started Session 51 of User zuul.
Dec  3 16:34:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:34:06 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v873: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:07 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14696 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:34:08 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14698 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:34:08 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v874: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:08 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec  3 16:34:08 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3661501632' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec  3 16:34:09 np0005544708 podman[248152]: 2025-12-03 21:34:09.14241228 +0000 UTC m=+0.078901992 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  3 16:34:09 np0005544708 podman[248153]: 2025-12-03 21:34:09.149554001 +0000 UTC m=+0.077936395 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 16:34:10 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v875: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:34:12 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v876: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:14 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v877: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:34:16 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v878: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:18 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v879: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:19 np0005544708 nova_compute[241566]: 2025-12-03 21:34:19.567 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:34:20 np0005544708 ceph-osd[86059]: bluestore.MempoolThread fragmentation_score=0.000120 took=0.000018s
Dec  3 16:34:20 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v880: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:20 np0005544708 ceph-osd[88129]: bluestore.MempoolThread fragmentation_score=0.000147 took=0.000030s
Dec  3 16:34:20 np0005544708 ceph-osd[87094]: bluestore.MempoolThread fragmentation_score=0.000123 took=0.000012s
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:34:21
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'backups', 'volumes', 'vms', '.mgr', 'cephfs.cephfs.data']
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:34:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:34:21 np0005544708 nova_compute[241566]: 2025-12-03 21:34:21.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:34:21 np0005544708 nova_compute[241566]: 2025-12-03 21:34:21.552 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 16:34:21 np0005544708 nova_compute[241566]: 2025-12-03 21:34:21.552 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 16:34:21 np0005544708 nova_compute[241566]: 2025-12-03 21:34:21.647 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 16:34:21 np0005544708 nova_compute[241566]: 2025-12-03 21:34:21.648 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:34:21 np0005544708 nova_compute[241566]: 2025-12-03 21:34:21.648 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:34:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:34:22 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v881: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:22 np0005544708 nova_compute[241566]: 2025-12-03 21:34:22.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:34:22 np0005544708 nova_compute[241566]: 2025-12-03 21:34:22.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:34:22 np0005544708 nova_compute[241566]: 2025-12-03 21:34:22.552 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 16:34:23 np0005544708 nova_compute[241566]: 2025-12-03 21:34:23.550 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:34:23 np0005544708 nova_compute[241566]: 2025-12-03 21:34:23.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:34:23 np0005544708 nova_compute[241566]: 2025-12-03 21:34:23.577 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:34:23 np0005544708 nova_compute[241566]: 2025-12-03 21:34:23.577 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:34:23 np0005544708 nova_compute[241566]: 2025-12-03 21:34:23.578 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:34:23 np0005544708 nova_compute[241566]: 2025-12-03 21:34:23.578 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 16:34:23 np0005544708 nova_compute[241566]: 2025-12-03 21:34:23.578 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:34:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:34:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/201796836' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:34:24 np0005544708 nova_compute[241566]: 2025-12-03 21:34:24.154 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:34:24 np0005544708 nova_compute[241566]: 2025-12-03 21:34:24.357 241570 WARNING nova.virt.libvirt.driver [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 16:34:24 np0005544708 nova_compute[241566]: 2025-12-03 21:34:24.358 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5075MB free_disk=59.988260054029524GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 16:34:24 np0005544708 nova_compute[241566]: 2025-12-03 21:34:24.359 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:34:24 np0005544708 nova_compute[241566]: 2025-12-03 21:34:24.359 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:34:24 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v882: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:24 np0005544708 nova_compute[241566]: 2025-12-03 21:34:24.445 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 16:34:24 np0005544708 nova_compute[241566]: 2025-12-03 21:34:24.446 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 16:34:24 np0005544708 nova_compute[241566]: 2025-12-03 21:34:24.468 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:34:24 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:34:24 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4091558153' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:34:24 np0005544708 nova_compute[241566]: 2025-12-03 21:34:24.968 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:34:24 np0005544708 nova_compute[241566]: 2025-12-03 21:34:24.975 241570 DEBUG nova.compute.provider_tree [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed in ProviderTree for provider: 94aba67c-5c5e-45d0-83d1-33eb467c8775 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 16:34:24 np0005544708 nova_compute[241566]: 2025-12-03 21:34:24.993 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 16:34:24 np0005544708 nova_compute[241566]: 2025-12-03 21:34:24.996 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 16:34:24 np0005544708 nova_compute[241566]: 2025-12-03 21:34:24.996 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:34:25 np0005544708 podman[248259]: 2025-12-03 21:34:25.137438122 +0000 UTC m=+0.148302035 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 16:34:26 np0005544708 nova_compute[241566]: 2025-12-03 21:34:25.998 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:34:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:34:26 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v883: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:34:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:34:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:34:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:34:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.533351853729544e-07 of space, bias 1.0, pg target 0.00010600055561188632 quantized to 32 (current 32)
Dec  3 16:34:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:34:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.833582600959152e-08 of space, bias 1.0, pg target 2.9500747802877454e-05 quantized to 32 (current 32)
Dec  3 16:34:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:34:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:34:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:34:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006678894471709225 of space, bias 1.0, pg target 0.20036683415127676 quantized to 32 (current 32)
Dec  3 16:34:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:34:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.953496000112683e-07 of space, bias 4.0, pg target 0.0009544195200135219 quantized to 16 (current 16)
Dec  3 16:34:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:34:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:34:28 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v884: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:29 np0005544708 ovs-vsctl[248404]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:34:29.635330) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797669635402, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 512, "num_deletes": 251, "total_data_size": 328859, "memory_usage": 339368, "flush_reason": "Manual Compaction"}
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797669640391, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 324122, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17702, "largest_seqno": 18213, "table_properties": {"data_size": 321332, "index_size": 826, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6647, "raw_average_key_size": 18, "raw_value_size": 315778, "raw_average_value_size": 892, "num_data_blocks": 38, "num_entries": 354, "num_filter_entries": 354, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764797637, "oldest_key_time": 1764797637, "file_creation_time": 1764797669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 5093 microseconds, and 1827 cpu microseconds.
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:34:29.640432) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 324122 bytes OK
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:34:29.640452) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:34:29.641691) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:34:29.641707) EVENT_LOG_v1 {"time_micros": 1764797669641702, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:34:29.641727) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 325912, prev total WAL file size 325912, number of live WAL files 2.
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:34:29.642291) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(316KB)], [41(5607KB)]
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797669642351, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 6065753, "oldest_snapshot_seqno": -1}
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 3772 keys, 4891771 bytes, temperature: kUnknown
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797669741044, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 4891771, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4866124, "index_size": 15106, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 89730, "raw_average_key_size": 23, "raw_value_size": 4797795, "raw_average_value_size": 1271, "num_data_blocks": 644, "num_entries": 3772, "num_filter_entries": 3772, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796079, "oldest_key_time": 0, "file_creation_time": 1764797669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:34:29.741598) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 4891771 bytes
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:34:29.746144) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 61.3 rd, 49.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 5.5 +0.0 blob) out(4.7 +0.0 blob), read-write-amplify(33.8) write-amplify(15.1) OK, records in: 4281, records dropped: 509 output_compression: NoCompression
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:34:29.746163) EVENT_LOG_v1 {"time_micros": 1764797669746152, "job": 20, "event": "compaction_finished", "compaction_time_micros": 98982, "compaction_time_cpu_micros": 12616, "output_level": 6, "num_output_files": 1, "total_output_size": 4891771, "num_input_records": 4281, "num_output_records": 3772, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797669746596, "job": 20, "event": "table_file_deletion", "file_number": 43}
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797669748044, "job": 20, "event": "table_file_deletion", "file_number": 41}
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:34:29.642230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:34:29.748082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:34:29.748088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:34:29.748090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:34:29.748092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:34:29.748094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:34:29 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:34:30 np0005544708 podman[248545]: 2025-12-03 21:34:30.13600977 +0000 UTC m=+0.066964680 container create c786748fb59db54b3aee60dee873e16a5cb66fdd6575ed46dd4ce1b33ae3e9fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:34:30 np0005544708 systemd[1]: Started libpod-conmon-c786748fb59db54b3aee60dee873e16a5cb66fdd6575ed46dd4ce1b33ae3e9fe.scope.
Dec  3 16:34:30 np0005544708 podman[248545]: 2025-12-03 21:34:30.10698383 +0000 UTC m=+0.037938790 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:34:30 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:34:30 np0005544708 podman[248545]: 2025-12-03 21:34:30.232932764 +0000 UTC m=+0.163887714 container init c786748fb59db54b3aee60dee873e16a5cb66fdd6575ed46dd4ce1b33ae3e9fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:34:30 np0005544708 podman[248545]: 2025-12-03 21:34:30.241288658 +0000 UTC m=+0.172243528 container start c786748fb59db54b3aee60dee873e16a5cb66fdd6575ed46dd4ce1b33ae3e9fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:34:30 np0005544708 podman[248545]: 2025-12-03 21:34:30.2454374 +0000 UTC m=+0.176392280 container attach c786748fb59db54b3aee60dee873e16a5cb66fdd6575ed46dd4ce1b33ae3e9fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:34:30 np0005544708 cranky_lewin[248615]: 167 167
Dec  3 16:34:30 np0005544708 virtqemud[241184]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  3 16:34:30 np0005544708 systemd[1]: libpod-c786748fb59db54b3aee60dee873e16a5cb66fdd6575ed46dd4ce1b33ae3e9fe.scope: Deactivated successfully.
Dec  3 16:34:30 np0005544708 podman[248545]: 2025-12-03 21:34:30.2890074 +0000 UTC m=+0.219962290 container died c786748fb59db54b3aee60dee873e16a5cb66fdd6575ed46dd4ce1b33ae3e9fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_lewin, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:34:30 np0005544708 systemd[1]: var-lib-containers-storage-overlay-35717595093e0aa56daa12735f5fbc804dca7256a8282693b66e33e3d4c5901c-merged.mount: Deactivated successfully.
Dec  3 16:34:30 np0005544708 virtqemud[241184]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  3 16:34:30 np0005544708 podman[248545]: 2025-12-03 21:34:30.333803664 +0000 UTC m=+0.264758554 container remove c786748fb59db54b3aee60dee873e16a5cb66fdd6575ed46dd4ce1b33ae3e9fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:34:30 np0005544708 systemd[1]: libpod-conmon-c786748fb59db54b3aee60dee873e16a5cb66fdd6575ed46dd4ce1b33ae3e9fe.scope: Deactivated successfully.
Dec  3 16:34:30 np0005544708 virtqemud[241184]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  3 16:34:30 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v885: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:30 np0005544708 podman[248662]: 2025-12-03 21:34:30.579364082 +0000 UTC m=+0.066852248 container create a351f01f75480eec117665002a5d2f6893f42de06caa923cf0f263ead9300afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_dubinsky, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec  3 16:34:30 np0005544708 systemd[1]: Started libpod-conmon-a351f01f75480eec117665002a5d2f6893f42de06caa923cf0f263ead9300afa.scope.
Dec  3 16:34:30 np0005544708 podman[248662]: 2025-12-03 21:34:30.554673068 +0000 UTC m=+0.042161264 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:34:30 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:34:30 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac92f578ffd9f553ae45ec60f54c8a3c7f9e25e8ec2b0a65646bce4c645c3e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:34:30 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac92f578ffd9f553ae45ec60f54c8a3c7f9e25e8ec2b0a65646bce4c645c3e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:34:30 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac92f578ffd9f553ae45ec60f54c8a3c7f9e25e8ec2b0a65646bce4c645c3e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:34:30 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac92f578ffd9f553ae45ec60f54c8a3c7f9e25e8ec2b0a65646bce4c645c3e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:34:30 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac92f578ffd9f553ae45ec60f54c8a3c7f9e25e8ec2b0a65646bce4c645c3e9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:34:30 np0005544708 podman[248662]: 2025-12-03 21:34:30.710205457 +0000 UTC m=+0.197693653 container init a351f01f75480eec117665002a5d2f6893f42de06caa923cf0f263ead9300afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:34:30 np0005544708 podman[248662]: 2025-12-03 21:34:30.722131937 +0000 UTC m=+0.209620133 container start a351f01f75480eec117665002a5d2f6893f42de06caa923cf0f263ead9300afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_dubinsky, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:34:30 np0005544708 podman[248662]: 2025-12-03 21:34:30.726491424 +0000 UTC m=+0.213979620 container attach a351f01f75480eec117665002a5d2f6893f42de06caa923cf0f263ead9300afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_dubinsky, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:34:30 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: cache status {prefix=cache status} (starting...)
Dec  3 16:34:31 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: client ls {prefix=client ls} (starting...)
Dec  3 16:34:31 np0005544708 pensive_dubinsky[248714]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:34:31 np0005544708 pensive_dubinsky[248714]: --> All data devices are unavailable
Dec  3 16:34:31 np0005544708 systemd[1]: libpod-a351f01f75480eec117665002a5d2f6893f42de06caa923cf0f263ead9300afa.scope: Deactivated successfully.
Dec  3 16:34:31 np0005544708 podman[248662]: 2025-12-03 21:34:31.295506103 +0000 UTC m=+0.782994279 container died a351f01f75480eec117665002a5d2f6893f42de06caa923cf0f263ead9300afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_dubinsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:34:31 np0005544708 systemd[1]: var-lib-containers-storage-overlay-1ac92f578ffd9f553ae45ec60f54c8a3c7f9e25e8ec2b0a65646bce4c645c3e9-merged.mount: Deactivated successfully.
Dec  3 16:34:31 np0005544708 podman[248662]: 2025-12-03 21:34:31.34083958 +0000 UTC m=+0.828327736 container remove a351f01f75480eec117665002a5d2f6893f42de06caa923cf0f263ead9300afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_dubinsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec  3 16:34:31 np0005544708 systemd[1]: libpod-conmon-a351f01f75480eec117665002a5d2f6893f42de06caa923cf0f263ead9300afa.scope: Deactivated successfully.
Dec  3 16:34:31 np0005544708 lvm[248920]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:34:31 np0005544708 lvm[248920]: VG ceph_vg2 finished
Dec  3 16:34:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:34:31 np0005544708 lvm[248932]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:34:31 np0005544708 lvm[248932]: VG ceph_vg1 finished
Dec  3 16:34:31 np0005544708 lvm[248947]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:34:31 np0005544708 lvm[248947]: VG ceph_vg0 finished
Dec  3 16:34:31 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14706 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:34:31 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: damage ls {prefix=damage ls} (starting...)
Dec  3 16:34:31 np0005544708 podman[249032]: 2025-12-03 21:34:31.830984249 +0000 UTC m=+0.055738388 container create 8dc3c05de104816f63a71920d4372e1a8b4de4354b790278ca34b5c19869c227 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:34:31 np0005544708 systemd[1]: Started libpod-conmon-8dc3c05de104816f63a71920d4372e1a8b4de4354b790278ca34b5c19869c227.scope.
Dec  3 16:34:31 np0005544708 podman[249032]: 2025-12-03 21:34:31.800988023 +0000 UTC m=+0.025742192 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:34:31 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:34:31 np0005544708 podman[249032]: 2025-12-03 21:34:31.914126352 +0000 UTC m=+0.138880481 container init 8dc3c05de104816f63a71920d4372e1a8b4de4354b790278ca34b5c19869c227 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shannon, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec  3 16:34:31 np0005544708 podman[249032]: 2025-12-03 21:34:31.921595373 +0000 UTC m=+0.146349512 container start 8dc3c05de104816f63a71920d4372e1a8b4de4354b790278ca34b5c19869c227 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shannon, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 16:34:31 np0005544708 podman[249032]: 2025-12-03 21:34:31.924549713 +0000 UTC m=+0.149303822 container attach 8dc3c05de104816f63a71920d4372e1a8b4de4354b790278ca34b5c19869c227 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 16:34:31 np0005544708 affectionate_shannon[249076]: 167 167
Dec  3 16:34:31 np0005544708 systemd[1]: libpod-8dc3c05de104816f63a71920d4372e1a8b4de4354b790278ca34b5c19869c227.scope: Deactivated successfully.
Dec  3 16:34:31 np0005544708 podman[249032]: 2025-12-03 21:34:31.928662024 +0000 UTC m=+0.153416163 container died 8dc3c05de104816f63a71920d4372e1a8b4de4354b790278ca34b5c19869c227 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shannon, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  3 16:34:31 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: dump loads {prefix=dump loads} (starting...)
Dec  3 16:34:31 np0005544708 systemd[1]: var-lib-containers-storage-overlay-c294490d04a6e536c725b6265712c02c411952d1b786dce2ee6c87c06e7e59c6-merged.mount: Deactivated successfully.
Dec  3 16:34:31 np0005544708 podman[249032]: 2025-12-03 21:34:31.966689765 +0000 UTC m=+0.191443914 container remove 8dc3c05de104816f63a71920d4372e1a8b4de4354b790278ca34b5c19869c227 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shannon, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:34:31 np0005544708 systemd[1]: libpod-conmon-8dc3c05de104816f63a71920d4372e1a8b4de4354b790278ca34b5c19869c227.scope: Deactivated successfully.
Dec  3 16:34:32 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  3 16:34:32 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14708 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:34:32 np0005544708 podman[249125]: 2025-12-03 21:34:32.145268953 +0000 UTC m=+0.044247780 container create 3ab8d62106fcfd8e54573db20c20d964ab9ff604d4873f32a2199735cd9a9abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_hofstadter, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  3 16:34:32 np0005544708 systemd[1]: Started libpod-conmon-3ab8d62106fcfd8e54573db20c20d964ab9ff604d4873f32a2199735cd9a9abd.scope.
Dec  3 16:34:32 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:34:32 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b466ca325c8b58e4a11da9187f605a6ecf99a8e073d78ae34ac0df365f9ebb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:34:32 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b466ca325c8b58e4a11da9187f605a6ecf99a8e073d78ae34ac0df365f9ebb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:34:32 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b466ca325c8b58e4a11da9187f605a6ecf99a8e073d78ae34ac0df365f9ebb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:34:32 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b466ca325c8b58e4a11da9187f605a6ecf99a8e073d78ae34ac0df365f9ebb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:34:32 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  3 16:34:32 np0005544708 podman[249125]: 2025-12-03 21:34:32.213158437 +0000 UTC m=+0.112137274 container init 3ab8d62106fcfd8e54573db20c20d964ab9ff604d4873f32a2199735cd9a9abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 16:34:32 np0005544708 podman[249125]: 2025-12-03 21:34:32.221676906 +0000 UTC m=+0.120655753 container start 3ab8d62106fcfd8e54573db20c20d964ab9ff604d4873f32a2199735cd9a9abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_hofstadter, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:34:32 np0005544708 podman[249125]: 2025-12-03 21:34:32.224791939 +0000 UTC m=+0.123770766 container attach 3ab8d62106fcfd8e54573db20c20d964ab9ff604d4873f32a2199735cd9a9abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 16:34:32 np0005544708 podman[249125]: 2025-12-03 21:34:32.131423381 +0000 UTC m=+0.030402228 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:34:32 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  3 16:34:32 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v886: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:32 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]: {
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:    "0": [
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:        {
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "devices": [
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "/dev/loop3"
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            ],
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "lv_name": "ceph_lv0",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "lv_size": "21470642176",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "name": "ceph_lv0",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "tags": {
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.cluster_name": "ceph",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.crush_device_class": "",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.encrypted": "0",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.objectstore": "bluestore",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.osd_id": "0",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.type": "block",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.vdo": "0",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.with_tpm": "0"
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            },
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "type": "block",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "vg_name": "ceph_vg0"
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:        }
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:    ],
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:    "1": [
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:        {
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "devices": [
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "/dev/loop4"
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            ],
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "lv_name": "ceph_lv1",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "lv_size": "21470642176",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "name": "ceph_lv1",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "tags": {
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.cluster_name": "ceph",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.crush_device_class": "",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.encrypted": "0",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.objectstore": "bluestore",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.osd_id": "1",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.type": "block",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.vdo": "0",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.with_tpm": "0"
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            },
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "type": "block",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "vg_name": "ceph_vg1"
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:        }
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:    ],
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:    "2": [
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:        {
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "devices": [
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "/dev/loop5"
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            ],
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "lv_name": "ceph_lv2",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "lv_size": "21470642176",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "name": "ceph_lv2",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "tags": {
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.cluster_name": "ceph",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.crush_device_class": "",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.encrypted": "0",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.objectstore": "bluestore",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.osd_id": "2",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.type": "block",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.vdo": "0",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:                "ceph.with_tpm": "0"
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            },
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "type": "block",
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:            "vg_name": "ceph_vg2"
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:        }
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]:    ]
Dec  3 16:34:32 np0005544708 cool_hofstadter[249150]: }
Dec  3 16:34:32 np0005544708 systemd[1]: libpod-3ab8d62106fcfd8e54573db20c20d964ab9ff604d4873f32a2199735cd9a9abd.scope: Deactivated successfully.
Dec  3 16:34:32 np0005544708 podman[249125]: 2025-12-03 21:34:32.530267676 +0000 UTC m=+0.429246503 container died 3ab8d62106fcfd8e54573db20c20d964ab9ff604d4873f32a2199735cd9a9abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 16:34:32 np0005544708 systemd[1]: var-lib-containers-storage-overlay-94b466ca325c8b58e4a11da9187f605a6ecf99a8e073d78ae34ac0df365f9ebb-merged.mount: Deactivated successfully.
Dec  3 16:34:32 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14712 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:34:32 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Dec  3 16:34:32 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1058758739' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec  3 16:34:32 np0005544708 podman[249125]: 2025-12-03 21:34:32.673365561 +0000 UTC m=+0.572344388 container remove 3ab8d62106fcfd8e54573db20c20d964ab9ff604d4873f32a2199735cd9a9abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_hofstadter, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:34:32 np0005544708 systemd[1]: libpod-conmon-3ab8d62106fcfd8e54573db20c20d964ab9ff604d4873f32a2199735cd9a9abd.scope: Deactivated successfully.
Dec  3 16:34:32 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  3 16:34:32 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  3 16:34:33 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:34:33 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3946531935' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:34:33 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14714 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:34:33 np0005544708 ceph-mgr[75500]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 16:34:33 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jxauqt[75496]: 2025-12-03T21:34:33.124+0000 7fa8c63a3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 16:34:33 np0005544708 podman[249336]: 2025-12-03 21:34:33.168275828 +0000 UTC m=+0.045012800 container create af869a01dbbcf8e94b3281499498c66ee35219dee089e90b6879398d8f478e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_almeida, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 16:34:33 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: ops {prefix=ops} (starting...)
Dec  3 16:34:33 np0005544708 systemd[1]: Started libpod-conmon-af869a01dbbcf8e94b3281499498c66ee35219dee089e90b6879398d8f478e4a.scope.
Dec  3 16:34:33 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:34:33 np0005544708 podman[249336]: 2025-12-03 21:34:33.23646369 +0000 UTC m=+0.113200702 container init af869a01dbbcf8e94b3281499498c66ee35219dee089e90b6879398d8f478e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec  3 16:34:33 np0005544708 podman[249336]: 2025-12-03 21:34:33.245462642 +0000 UTC m=+0.122199614 container start af869a01dbbcf8e94b3281499498c66ee35219dee089e90b6879398d8f478e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_almeida, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 16:34:33 np0005544708 laughing_almeida[249361]: 167 167
Dec  3 16:34:33 np0005544708 systemd[1]: libpod-af869a01dbbcf8e94b3281499498c66ee35219dee089e90b6879398d8f478e4a.scope: Deactivated successfully.
Dec  3 16:34:33 np0005544708 podman[249336]: 2025-12-03 21:34:33.249784248 +0000 UTC m=+0.126521240 container attach af869a01dbbcf8e94b3281499498c66ee35219dee089e90b6879398d8f478e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_almeida, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:34:33 np0005544708 podman[249336]: 2025-12-03 21:34:33.153932963 +0000 UTC m=+0.030669935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:34:33 np0005544708 podman[249336]: 2025-12-03 21:34:33.250113967 +0000 UTC m=+0.126850939 container died af869a01dbbcf8e94b3281499498c66ee35219dee089e90b6879398d8f478e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:34:33 np0005544708 systemd[1]: var-lib-containers-storage-overlay-47cb701e3e2ed7a43d276497449a8516bfefdb3e64476e42e85602c8c692f4e5-merged.mount: Deactivated successfully.
Dec  3 16:34:33 np0005544708 podman[249336]: 2025-12-03 21:34:33.291327154 +0000 UTC m=+0.168064126 container remove af869a01dbbcf8e94b3281499498c66ee35219dee089e90b6879398d8f478e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:34:33 np0005544708 systemd[1]: libpod-conmon-af869a01dbbcf8e94b3281499498c66ee35219dee089e90b6879398d8f478e4a.scope: Deactivated successfully.
Dec  3 16:34:33 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Dec  3 16:34:33 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3377945510' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec  3 16:34:33 np0005544708 podman[249427]: 2025-12-03 21:34:33.542430581 +0000 UTC m=+0.075979793 container create 952d7e715170c42b5bade2ce358d31e1acd104754b75d8637601d42c192f28b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_greider, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 16:34:33 np0005544708 systemd[1]: Started libpod-conmon-952d7e715170c42b5bade2ce358d31e1acd104754b75d8637601d42c192f28b4.scope.
Dec  3 16:34:33 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:34:33 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f2b8e759a8684c8020bca8dbbfa8b5421cebe0c0b8a68e8c7bb70f742afed81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:34:33 np0005544708 podman[249427]: 2025-12-03 21:34:33.517714257 +0000 UTC m=+0.051263549 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:34:33 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f2b8e759a8684c8020bca8dbbfa8b5421cebe0c0b8a68e8c7bb70f742afed81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:34:33 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f2b8e759a8684c8020bca8dbbfa8b5421cebe0c0b8a68e8c7bb70f742afed81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:34:33 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f2b8e759a8684c8020bca8dbbfa8b5421cebe0c0b8a68e8c7bb70f742afed81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:34:33 np0005544708 podman[249427]: 2025-12-03 21:34:33.625099072 +0000 UTC m=+0.158648324 container init 952d7e715170c42b5bade2ce358d31e1acd104754b75d8637601d42c192f28b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_greider, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 16:34:33 np0005544708 podman[249427]: 2025-12-03 21:34:33.630218629 +0000 UTC m=+0.163767831 container start 952d7e715170c42b5bade2ce358d31e1acd104754b75d8637601d42c192f28b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec  3 16:34:33 np0005544708 podman[249427]: 2025-12-03 21:34:33.658027187 +0000 UTC m=+0.191576429 container attach 952d7e715170c42b5bade2ce358d31e1acd104754b75d8637601d42c192f28b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:34:33 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec  3 16:34:33 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204876595' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec  3 16:34:33 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: session ls {prefix=session ls} (starting...)
Dec  3 16:34:33 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: status {prefix=status} (starting...)
Dec  3 16:34:34 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec  3 16:34:34 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2722872596' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec  3 16:34:34 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec  3 16:34:34 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/377887505' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec  3 16:34:34 np0005544708 lvm[249591]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:34:34 np0005544708 lvm[249593]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:34:34 np0005544708 lvm[249593]: VG ceph_vg1 finished
Dec  3 16:34:34 np0005544708 lvm[249591]: VG ceph_vg0 finished
Dec  3 16:34:34 np0005544708 lvm[249595]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:34:34 np0005544708 lvm[249595]: VG ceph_vg2 finished
Dec  3 16:34:34 np0005544708 lvm[249620]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:34:34 np0005544708 lvm[249620]: VG ceph_vg2 finished
Dec  3 16:34:34 np0005544708 stupefied_greider[249449]: {}
Dec  3 16:34:34 np0005544708 systemd[1]: libpod-952d7e715170c42b5bade2ce358d31e1acd104754b75d8637601d42c192f28b4.scope: Deactivated successfully.
Dec  3 16:34:34 np0005544708 systemd[1]: libpod-952d7e715170c42b5bade2ce358d31e1acd104754b75d8637601d42c192f28b4.scope: Consumed 1.203s CPU time.
Dec  3 16:34:34 np0005544708 podman[249427]: 2025-12-03 21:34:34.370640582 +0000 UTC m=+0.904189814 container died 952d7e715170c42b5bade2ce358d31e1acd104754b75d8637601d42c192f28b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_greider, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  3 16:34:34 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v887: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:34 np0005544708 systemd[1]: var-lib-containers-storage-overlay-0f2b8e759a8684c8020bca8dbbfa8b5421cebe0c0b8a68e8c7bb70f742afed81-merged.mount: Deactivated successfully.
Dec  3 16:34:34 np0005544708 podman[249427]: 2025-12-03 21:34:34.413072072 +0000 UTC m=+0.946621274 container remove 952d7e715170c42b5bade2ce358d31e1acd104754b75d8637601d42c192f28b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Dec  3 16:34:34 np0005544708 systemd[1]: libpod-conmon-952d7e715170c42b5bade2ce358d31e1acd104754b75d8637601d42c192f28b4.scope: Deactivated successfully.
Dec  3 16:34:34 np0005544708 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 16:34:34 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:34:34 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:34:34 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:34:34 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:34:34 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14726 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:34:34 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec  3 16:34:34 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3227831888' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec  3 16:34:34 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:34:34 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:34:35 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14730 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:34:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  3 16:34:35 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/486783745' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec  3 16:34:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Dec  3 16:34:35 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3163567038' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec  3 16:34:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  3 16:34:35 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3009857062' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec  3 16:34:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec  3 16:34:36 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/107846304' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec  3 16:34:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  3 16:34:36 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3980611023' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec  3 16:34:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:34:36 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v888: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:36 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14742 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:34:36 np0005544708 ceph-mgr[75500]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  3 16:34:36 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jxauqt[75496]: 2025-12-03T21:34:36.674+0000 7fa8c63a3640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  3 16:34:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  3 16:34:36 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4143601676' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec  3 16:34:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec  3 16:34:37 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2866941577' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec  3 16:34:37 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14748 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.810041 4 0.000108
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000012 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000063 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000056 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.810186 4 0.000042
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.810735 4 0.000043
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.809542 4 0.000026
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000018 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.809662 4 0.000027
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000208 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000013 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000034 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000041 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000011 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000037 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000085 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000301 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.809864 4 0.000029
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000069 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000109 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000008 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.810131 4 0.000026
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.809629 4 0.000031
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000075 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.810149 4 0.000085
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000014 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000120 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000016 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.809492 4 0.000028
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000062 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000016 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000014 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000018 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.809598 4 0.000064
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000019 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.809410 4 0.000028
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000071 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000010 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000029 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000012 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000050 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000098 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000046 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000118 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000021 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000109 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002378 3 0.000239
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002410 3 0.000191
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.10( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.0( empty local-lis/les=37/38 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005498 3 0.000263
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005460 3 0.000103
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.10( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005483 3 0.000160
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005351 3 0.000106
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.10( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.10( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000053 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.10( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005468 3 0.000229
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005564 3 0.000154
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005525 3 0.000164
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005354 3 0.000140
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005378 3 0.000207
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005408 3 0.000184
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.17( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005259 3 0.000123
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005262 3 0.000414
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005374 3 0.000294
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.8( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.0( empty local-lis/les=37/38 n=0 ec=19/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005254 3 0.000055
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005212 3 0.000150
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.0( empty local-lis/les=37/38 n=0 ec=19/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.0( empty local-lis/les=37/38 n=0 ec=19/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.0( empty local-lis/les=37/38 n=0 ec=19/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005248 3 0.000148
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005120 3 0.000127
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005119 3 0.000121
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005014 3 0.000101
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004964 3 0.000146
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005005 3 0.000171
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004998 3 0.000420
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005003 3 0.000214
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004896 3 0.000151
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004770 3 0.000187
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000042 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004908 3 0.000190
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004842 3 0.000109
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004828 3 0.000196
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004752 3 0.000246
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004704 3 0.000434
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.001439 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/19 les/c/f=38/20/0 sis=37) [2] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 59006976 unmapped: 1802240 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 314347 data_alloc: 218103808 data_used: 0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 38 handle_osd_map epochs [38,39], i have 38, src has [1,39]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 39 heartbeat osd_stat(store_statfs(0x4fe168000/0x0/0x4ffc00000, data 0x26b89/0x62000, compress 0x0/0x0/0x0, omap 0x45ed, meta 0x1a2ba13), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 59047936 unmapped: 1761280 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 59129856 unmapped: 1679360 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 39 handle_osd_map epochs [40,40], i have 39, src has [1,40]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 59187200 unmapped: 1622016 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 59187200 unmapped: 1622016 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 40 heartbeat osd_stat(store_statfs(0x4fe162000/0x0/0x4ffc00000, data 0x295f3/0x68000, compress 0x0/0x0/0x0, omap 0x4b03, meta 0x1a2b4fd), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 59228160 unmapped: 1581056 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 324715 data_alloc: 218103808 data_used: 0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 59228160 unmapped: 1581056 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 59219968 unmapped: 1589248 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.410965919s of 14.511530876s, submitted: 209
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 59260928 unmapped: 1548288 heap: 60809216 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 40 heartbeat osd_stat(store_statfs(0x4fe162000/0x0/0x4ffc00000, data 0x295f3/0x68000, compress 0x0/0x0/0x0, omap 0x4b03, meta 0x1a2b4fd), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 40 handle_osd_map epochs [41,41], i have 40, src has [1,41]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.18(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000101 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000026
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000152 1 0.000041
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1b(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000079 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000026
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000107 1 0.000033
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1a(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000086 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000025
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000090 1 0.000056
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.e(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000075 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000018
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000041
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000043 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000012
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000073 1 0.000045
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.a(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000075 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000039
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000034
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.13(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000052 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000015
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000073 1 0.000035
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.11(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000056 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000025
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000065 1 0.000033
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1c(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000071 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000021
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000048 1 0.000032
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.115552 8 0.000099
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.120882 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.121216 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.121254 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.119262 8 0.000113
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.121750 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.121835 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.884406090s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348602295s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.121877 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.884387016s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348602295s@ mbc={}] exit Reset 0.000063 1 0.000104
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.884387016s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348602295s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880822182s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.345054626s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.884387016s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348602295s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.884387016s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348602295s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.884387016s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348602295s@ mbc={}] exit Start 0.000016 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.884387016s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348602295s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880803108s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.345054626s@ mbc={}] exit Reset 0.000054 1 0.000081
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880803108s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.345054626s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880803108s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.345054626s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880803108s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.345054626s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880803108s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.345054626s@ mbc={}] exit Start 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880803108s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.345054626s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.146308 20 0.000142
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.152461 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.152568 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.152598 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.853688240s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.318038940s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.853674889s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.318038940s@ mbc={}] exit Reset 0.000028 1 0.000049
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.853674889s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.318038940s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.853674889s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.318038940s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.853674889s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.318038940s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.853674889s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.318038940s@ mbc={}] exit Start 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.853674889s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.318038940s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.146129 20 0.000180
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.152608 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.152722 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.152793 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.853358269s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.318038940s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.853232384s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.318038940s@ mbc={}] exit Reset 0.000201 1 0.000683
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.853232384s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.318038940s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.853232384s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.318038940s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.853232384s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.318038940s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.853232384s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.318038940s@ mbc={}] exit Start 0.000109 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.853232384s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.318038940s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.147707 20 0.000121
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.153889 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.153949 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.153976 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.852234840s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317932129s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.852218628s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317932129s@ mbc={}] exit Reset 0.000038 1 0.000066
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.852218628s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317932129s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.852218628s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317932129s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.852218628s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317932129s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.852218628s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317932129s@ mbc={}] exit Start 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.852218628s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317932129s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004026 2 0.000098
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.18(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000056 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000027
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.148126 20 0.000089
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.155164 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.155232 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.155257 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.851003647s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317916870s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850990295s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317916870s@ mbc={}] exit Reset 0.000026 1 0.000043
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850990295s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317916870s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850990295s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317916870s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850990295s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317916870s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850990295s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317916870s@ mbc={}] exit Start 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850990295s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317916870s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.149078 20 0.000091
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.155274 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.155377 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.155407 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.118720 8 0.000051
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.124235 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.124365 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.124406 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.881216049s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348243713s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.881206512s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348243713s@ mbc={}] exit Reset 0.000021 1 0.000039
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.881206512s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348243713s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.881206512s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348243713s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.881206512s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348243713s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.881206512s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348243713s@ mbc={}] exit Start 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850802422s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317840576s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.881206512s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348243713s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850778580s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317840576s@ mbc={}] exit Reset 0.000069 1 0.000111
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850778580s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317840576s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.149121 20 0.000290
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.155552 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850778580s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317840576s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.155607 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850778580s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317840576s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.155628 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850778580s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317840576s@ mbc={}] exit Start 0.000011 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850778580s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317840576s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 41 handle_osd_map epochs [41,41], i have 41, src has [1,41]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850716591s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317825317s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.119094 8 0.000067
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.124591 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.124651 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.124677 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850179672s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317825317s@ mbc={}] exit Reset 0.000552 1 0.000564
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850179672s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317825317s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850179672s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317825317s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850179672s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317825317s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850179672s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317825317s@ mbc={}] exit Start 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850179672s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317825317s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.119291 8 0.000082
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.124905 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880301476s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348007202s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.124999 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.149840 20 0.000082
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.156047 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.156240 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.125034 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.156287 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880270004s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348007202s@ mbc={}] exit Reset 0.000434 1 0.000469
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880270004s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348007202s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880270004s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348007202s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850170135s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317924500s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880270004s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348007202s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880270004s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348007202s@ mbc={}] exit Start 0.000011 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880525589s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348281860s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850155830s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317924500s@ mbc={}] exit Reset 0.000029 1 0.000051
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880270004s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348007202s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850155830s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317924500s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850155830s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317924500s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850155830s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317924500s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850155830s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317924500s@ mbc={}] exit Start 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.850155830s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317924500s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880499840s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348281860s@ mbc={}] exit Reset 0.000054 1 0.000091
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880499840s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348281860s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880499840s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348281860s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880499840s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348281860s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880499840s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348281860s@ mbc={}] exit Start 0.000011 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880499840s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348281860s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.119609 8 0.000094
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.125026 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.125081 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.125103 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880378723s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348236084s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880367279s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348236084s@ mbc={}] exit Reset 0.000024 1 0.000043
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880367279s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348236084s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880367279s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348236084s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880367279s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348236084s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880367279s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348236084s@ mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880367279s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348236084s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.150056 20 0.000106
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.156274 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.156365 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.119423 8 0.000059
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.124898 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.124997 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.156416 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.125020 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.119496 8 0.000055
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.125068 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880339622s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348297119s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.125150 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.125211 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.849840164s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317825317s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.849815369s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317825317s@ mbc={}] exit Reset 0.000059 1 0.000094
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880279541s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348289490s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.849815369s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317825317s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.849815369s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317825317s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.849815369s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317825317s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.849815369s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317825317s@ mbc={}] exit Start 0.000010 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880253792s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348289490s@ mbc={}] exit Reset 0.000053 1 0.000088
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.849815369s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317825317s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880253792s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348289490s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880253792s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348289490s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880253792s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348289490s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880253792s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348289490s@ mbc={}] exit Start 0.000010 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880223274s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348297119s@ mbc={}] exit Reset 0.000131 1 0.000145
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880253792s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348289490s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880223274s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348297119s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880223274s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348297119s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880223274s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348297119s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880223274s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348297119s@ mbc={}] exit Start 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880223274s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348297119s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.150997 20 0.000129
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.156847 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.156928 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.156957 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.849054337s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317298889s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.119778 8 0.000076
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.125187 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.125270 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.849026680s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317298889s@ mbc={}] exit Reset 0.000053 1 0.000113
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.125302 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.151138 20 0.000072
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.156988 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.849026680s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317298889s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.157140 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.849026680s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317298889s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.157173 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.849026680s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317298889s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.849026680s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317298889s@ mbc={}] exit Start 0.000013 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.849026680s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317298889s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848933220s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317253113s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880013466s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348335266s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848918915s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317253113s@ mbc={}] exit Reset 0.000031 1 0.000056
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848918915s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317253113s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848918915s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317253113s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848918915s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317253113s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848918915s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317253113s@ mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848918915s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317253113s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879985809s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348335266s@ mbc={}] exit Reset 0.000077 1 0.000092
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879985809s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348335266s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879985809s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348335266s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879985809s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348335266s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879985809s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348335266s@ mbc={}] exit Start 0.000010 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879985809s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348335266s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.119862 8 0.000060
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.125151 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.125235 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.125271 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880131721s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348648071s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.151401 20 0.000075
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.157373 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880120277s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348648071s@ mbc={}] exit Reset 0.000027 1 0.000046
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880120277s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348648071s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.157535 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880120277s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348648071s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880120277s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348648071s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.157567 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880120277s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348648071s@ mbc={}] exit Start 0.000017 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880120277s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348648071s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.151460 20 0.000078
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.157479 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.157590 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848572731s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317153931s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.157659 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.119916 8 0.000062
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848547935s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317153931s@ mbc={}] exit Reset 0.000051 1 0.000088
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.125071 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848547935s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317153931s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.125141 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848547935s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317153931s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.125165 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848423004s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317047119s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848547935s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317153931s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848547935s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317153931s@ mbc={}] exit Start 0.000013 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848547935s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317153931s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.880009651s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348655701s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879998207s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348655701s@ mbc={}] exit Reset 0.000030 1 0.000051
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879998207s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348655701s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879998207s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348655701s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879998207s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348655701s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879998207s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348655701s@ mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879998207s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348655701s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848355293s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317047119s@ mbc={}] exit Reset 0.000093 1 0.000115
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848355293s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317047119s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.151656 20 0.000068
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848355293s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317047119s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.157792 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848355293s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317047119s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848355293s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317047119s@ mbc={}] exit Start 0.000012 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.119986 8 0.000072
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.157863 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.125042 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848355293s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317047119s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.125146 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.125177 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.157903 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879922867s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348686218s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879914284s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348686218s@ mbc={}] exit Reset 0.000027 1 0.000046
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879914284s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348686218s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848220825s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317001343s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879914284s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348686218s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879914284s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348686218s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879914284s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348686218s@ mbc={}] exit Start 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879914284s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348686218s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848196983s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317001343s@ mbc={}] exit Reset 0.000051 1 0.000092
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.151698 20 0.000161
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848196983s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317001343s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.157830 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848196983s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317001343s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.157980 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848196983s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317001343s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.120154 8 0.000054
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.125306 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.125383 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.158028 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.125408 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848196983s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317001343s@ mbc={}] exit Start 0.000012 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879804611s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348670959s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848196983s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317001343s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879792213s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348670959s@ mbc={}] exit Reset 0.000025 1 0.000045
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848138809s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.317008972s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879792213s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348670959s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879792213s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348670959s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879792213s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348670959s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879792213s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348670959s@ mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879792213s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348670959s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848105431s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317008972s@ mbc={}] exit Reset 0.000064 1 0.000096
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.151871 20 0.000076
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.158083 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.158180 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848105431s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317008972s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.158200 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848105431s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317008972s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848105431s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317008972s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.120118 8 0.000097
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848002434s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.316963196s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848105431s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317008972s@ mbc={}] exit Start 0.000017 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.125213 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.848105431s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.317008972s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847991943s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316963196s@ mbc={}] exit Reset 0.000026 1 0.000043
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.125350 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847991943s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316963196s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847991943s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316963196s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847991943s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316963196s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847991943s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316963196s@ mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.125391 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847991943s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316963196s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879694939s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348716736s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879670143s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348716736s@ mbc={}] exit Reset 0.000052 1 0.000091
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879670143s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348716736s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879670143s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348716736s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879670143s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348716736s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879670143s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348716736s@ mbc={}] exit Start 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879670143s) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348716736s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.120355 8 0.000076
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.125417 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.125470 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.125494 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879558563s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348678589s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.152116 20 0.000097
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879547119s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348678589s@ mbc={}] exit Reset 0.000025 1 0.000043
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.158314 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879547119s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348678589s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879547119s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348678589s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879547119s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348678589s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.158419 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879547119s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348678589s@ mbc={}] exit Start 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879547119s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348678589s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.158459 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.152080 20 0.000071
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.120461 8 0.000049
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.125467 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.158444 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.125561 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.125591 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847544670s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.316764832s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.158564 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.158602 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879440308s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348686218s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847513199s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316764832s@ mbc={}] exit Reset 0.000090 1 0.000132
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879426003s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348686218s@ mbc={}] exit Reset 0.000032 1 0.000061
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847513199s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316764832s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879426003s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348686218s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879426003s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348686218s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847513199s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316764832s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879426003s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348686218s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879426003s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348686218s@ mbc={}] exit Start 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847513199s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316764832s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879426003s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348686218s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847513199s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316764832s@ mbc={}] exit Start 0.000012 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847468376s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.316741943s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847513199s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316764832s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.152949 20 0.000124
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847443581s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316741943s@ mbc={}] exit Reset 0.000052 1 0.000131
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.158731 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.158794 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847443581s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316741943s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.158818 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847443581s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316741943s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847443581s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316741943s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847443581s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316741943s@ mbc={}] exit Start 0.000013 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847066879s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.316398621s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847443581s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316741943s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847057343s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316398621s@ mbc={}] exit Reset 0.000034 1 0.000041
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847057343s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316398621s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847057343s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316398621s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847057343s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316398621s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847057343s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316398621s@ mbc={}] exit Start 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847057343s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316398621s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.153036 20 0.000100
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.158868 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.158923 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.153120 20 0.000102
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.158876 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.159051 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.158955 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.159087 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.120597 8 0.000056
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.125538 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846938133s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.316390991s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.125617 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846924782s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316390991s@ mbc={}] exit Reset 0.000026 1 0.000045
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846894264s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.316352844s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846924782s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316390991s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.125650 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846924782s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316390991s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846924782s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316390991s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846924782s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316390991s@ mbc={}] exit Start 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846924782s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316390991s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846867561s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316352844s@ mbc={}] exit Reset 0.000054 1 0.000091
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846867561s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316352844s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846867561s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316352844s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879432678s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348930359s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846867561s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316352844s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846867561s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316352844s@ mbc={}] exit Start 0.000012 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846867561s) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316352844s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879405975s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348930359s@ mbc={}] exit Reset 0.000053 1 0.000086
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.153210 20 0.000129
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.159117 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.159233 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.159264 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879405975s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348930359s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879405975s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348930359s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879405975s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348930359s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879405975s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348930359s@ mbc={}] exit Start 0.000011 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879405975s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348930359s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.156262 20 0.000153
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.159238 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.159391 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.159423 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.120737 8 0.000053
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.125609 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.125730 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.843698502s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.313354492s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.125776 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.843666077s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.313354492s@ mbc={}] exit Reset 0.000058 1 0.000090
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.843666077s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.313354492s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879385948s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.349082947s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.843666077s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.313354492s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.843666077s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.313354492s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.843666077s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.313354492s@ mbc={}] exit Start 0.000011 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.843666077s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.313354492s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846693993s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.316413879s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879362106s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.349082947s@ mbc={}] exit Reset 0.000047 1 0.000078
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879362106s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.349082947s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879362106s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.349082947s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846668243s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316413879s@ mbc={}] exit Reset 0.000195 1 0.000209
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879362106s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.349082947s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846668243s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316413879s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846668243s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316413879s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879362106s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.349082947s@ mbc={}] exit Start 0.000011 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846668243s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316413879s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846668243s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316413879s@ mbc={}] exit Start 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879362106s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.349082947s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.846668243s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316413879s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.120910 8 0.000049
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.125786 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.125849 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.125869 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 13.152763 20 0.000132
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 13.158981 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879122734s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.348976135s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 13.159213 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879112244s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348976135s@ mbc={}] exit Reset 0.000029 1 0.000047
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879112244s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348976135s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879112244s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348976135s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879112244s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348976135s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] exit Started 13.159245 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879112244s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348976135s@ mbc={}] exit Start 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879112244s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.348976135s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=33) [2] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847078323s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 active pruub 89.316993713s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 9.119542 8 0.001503
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 9.125771 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 9.125910 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 9.125936 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847055435s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316993713s@ mbc={}] exit Reset 0.000054 1 0.000087
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847055435s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316993713s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847055435s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316993713s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879151344s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 active pruub 93.349098206s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847055435s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316993713s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879140854s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.349098206s@ mbc={}] exit Reset 0.000023 1 0.000039
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847055435s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316993713s@ mbc={}] exit Start 0.000010 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879140854s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.349098206s@ mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879140854s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.349098206s@ mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=10.847055435s) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY pruub 89.316993713s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879140854s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.349098206s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879140854s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.349098206s@ mbc={}] exit Start 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41 pruub=14.879140854s) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY pruub 93.349098206s@ mbc={}] enter Started/Stray
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1c(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000062 1 0.000030
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008148 2 0.000036
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.16(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000083 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000019
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000019 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000100 1 0.000062
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.11(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000046 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000031
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.11(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000029 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000024
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.15(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000029 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000060 1 0.000023
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.e(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000027 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000014
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000054 1 0.000024
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.a(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000027 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000040 1 0.000022
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.8(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000032 1 0.000022
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.5(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000027 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000006
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000034 1 0.000021
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.5(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000061 1 0.000025
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000032 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000044 1 0.000024
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.2(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000075 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000017
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000114 1 0.000036
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.7(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000025
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.8(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000030
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.c(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000065 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000021
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000035
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.e(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000048 1 0.000022
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1d(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000131 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000040
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000063 1 0.000041
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1e(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=0 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000010
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000028
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1a(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000093 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=0 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000023
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000061 1 0.000037
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008435 2 0.000021
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.001248 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019787 2 0.000053
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019538 2 0.000028
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019377 2 0.000044
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018921 2 0.000038
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018665 2 0.000033
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018447 2 0.000041
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019925 2 0.000027
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020758 2 0.000017
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021125 2 0.000018
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021272 2 0.000017
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000022 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020637 2 0.000015
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021035 2 0.000018
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019937 2 0.000019
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019753 2 0.000029
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019395 2 0.000045
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020828 2 0.000016
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018935 2 0.000031
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018672 2 0.000040
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018521 2 0.000019
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018175 2 0.000041
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017610 2 0.000039
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021902 2 0.000047
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019298 2 0.000022
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021733 2 0.000022
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.022705 2 0.000038
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000025 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018571 2 0.000025
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 60792832 unmapped: 1064960 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 41 handle_osd_map epochs [41,42], i have 41, src has [1,42]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 41 handle_osd_map epochs [42,42], i have 42, src has [1,42]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993374 2 0.000040
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005419 2 0.000045
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013315 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013728 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.009881 2 0.000039
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.014236 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993517 2 0.000026
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013687 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993717 2 0.000062
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 42 handle_osd_map epochs [42,42], i have 42, src has [1,42]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992883 2 0.000083
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013489 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994405 2 0.000032
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013179 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013965 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994470 2 0.000024
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013048 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.984112 2 0.000933
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007033 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000753 2 0.001300
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010552 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.984730 2 0.000048
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006785 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.984862 2 0.000053
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006700 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.985435 2 0.000060
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006682 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.985559 2 0.000255
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006948 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.985789 2 0.000044
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006637 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.985694 2 0.000166
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006832 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.985804 2 0.000020
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006699 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986019 2 0.000022
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006729 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986004 2 0.000027
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005575 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986242 2 0.000020
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006086 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986228 2 0.000026
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005620 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986447 2 0.000016
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005218 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986509 2 0.000029
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005541 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986698 2 0.000025
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005314 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986698 2 0.000029
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004983 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994808 2 0.000024
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986955 2 0.000018
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.016126 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004664 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986443 2 0.000038
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005117 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986270 2 0.000032
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007492 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004449 4 0.000190
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007700 4 0.001052
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006785 4 0.000692
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007729 4 0.000958
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006802 4 0.000124
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006724 4 0.001055
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006708 4 0.000134
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006498 4 0.000091
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006553 4 0.000192
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006427 4 0.000067
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006307 4 0.000066
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007182 4 0.001376
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018497 7 0.000036
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000086 1 0.000045
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018430 7 0.000036
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018244 7 0.000085
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000102 1 0.000355
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000133 1 0.000397
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009990 4 0.000074
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010139 4 0.000131
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010294 4 0.000095
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009930 4 0.000153
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009880 4 0.000069
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009792 4 0.000068
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009734 4 0.000106
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009949 4 0.000128
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009596 4 0.000099
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009526 4 0.000080
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009241 4 0.000083
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008990 4 0.000068
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009526 4 0.000099
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009010 4 0.002393
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008910 4 0.000089
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008760 4 0.001266
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=41/42 n=0 ec=35/17 lis/c=41/35 les/c/f=42/36/0 sis=41) [2] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009568 4 0.000120
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=41/42 n=0 ec=39/23 lis/c=41/39 les/c/f=42/40/0 sis=41) [2] r=0 lpr=41 pi=[39,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020983 7 0.000055
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000126 1 0.000029
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023549 7 0.000042
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025525 7 0.000371
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023701 7 0.000045
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023006 7 0.000059
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022872 7 0.000068
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022320 7 0.000079
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022954 7 0.000075
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022637 7 0.000049
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022391 7 0.000042
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000136 1 0.000040
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000185 1 0.000029
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000243 1 0.000016
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000349 1 0.000014
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000414 1 0.000023
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000502 1 0.000016
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000599 1 0.000017
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000697 1 0.000017
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000795 1 0.000015
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028145 7 0.000055
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000104 1 0.000087
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029354 7 0.000093
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029254 7 0.000072
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028701 7 0.000061
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029388 7 0.000094
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000134 1 0.000046
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029333 7 0.000039
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029867 7 0.000064
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028649 7 0.000042
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029255 7 0.000668
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029277 7 0.000044
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000307 1 0.000056
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028829 7 0.000080
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029039 7 0.000099
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034380 7 0.000065
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029279 7 0.000051
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000406 1 0.000044
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028691 7 0.000038
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000355 1 0.000021
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.13( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.012394 1 0.000025
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.13( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.012500 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.13( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.031035 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000325 1 0.000023
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000427 1 0.000015
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000572 1 0.000016
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029921 7 0.000078
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029300 7 0.000072
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000704 1 0.000021
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031004 7 0.000062
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032218 7 0.000096
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030653 7 0.000035
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029598 7 0.000069
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030761 7 0.000057
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030602 7 0.000038
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000962 1 0.000045
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031560 7 0.000038
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031802 7 0.000065
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034694 7 0.000062
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030741 7 0.000078
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036090 7 0.000049
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001882 1 0.000033
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031310 7 0.000049
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036399 7 0.000066
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002122 1 0.000019
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002177 1 0.000034
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002247 1 0.000017
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002285 1 0.000036
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001743 1 0.000044
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001812 1 0.000065
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001775 1 0.000165
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001766 1 0.000028
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001844 1 0.000071
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001875 1 0.000981
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001210 1 0.001004
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001177 1 0.000044
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001257 1 0.000085
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001354 1 0.001077
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001413 1 0.000042
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001445 1 0.000018
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001500 1 0.000035
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001367 1 0.000313
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001385 1 0.000057
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.14( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.017415 1 0.000054
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.14( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.017598 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.14( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.036357 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.15( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.020853 1 0.000028
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.15( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.021033 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.15( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.039683 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.11( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.025814 1 0.000088
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.11( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.026023 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.11( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.047044 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 42 heartbeat osd_stat(store_statfs(0x4fe15f000/0x0/0x4ffc00000, data 0x2b2c7/0x6b000, compress 0x0/0x0/0x0, omap 0x4d8e, meta 0x1a2b272), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.031684 1 0.000030
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.031853 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.055439 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1b( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038941 1 0.000042
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1b( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.039150 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1b( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.064914 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.17( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.046211 1 0.000047
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.17( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.046496 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.17( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.070219 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.15( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.053627 1 0.000024
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.15( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.054019 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.15( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.077046 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.13( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.060944 1 0.000021
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.13( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.061387 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.13( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.084293 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.068529 1 0.000024
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.069073 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.091430 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.075563 1 0.000025
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.076262 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.099262 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.16( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.082819 1 0.000020
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.16( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.083545 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.16( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.106206 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.d( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.090008 1 0.000025
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.d( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.090833 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.d( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.113245 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.b( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.091147 1 0.000053
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.b( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.091303 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.b( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.119504 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.3( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.098093 1 0.000025
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.3( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.098255 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.3( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.127688 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.a( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.105365 1 0.000022
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.a( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.105705 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.a( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.134473 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.5( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.112625 1 0.000027
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.5( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.113070 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.5( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.142357 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.4( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.119875 1 0.000024
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.4( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.120258 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.4( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.149711 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.127445 1 0.000078
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.127831 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.157203 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.19( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.134823 1 0.000036
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.19( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.135443 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.19( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.164116 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.7( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.142319 1 0.000127
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.7( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.142810 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.7( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.172716 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.149099 1 0.000072
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.149875 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.179178 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.6( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.156347 1 0.000045
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.6( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.157353 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.6( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.186656 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1a( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.162863 1 0.000029
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1a( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.164811 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1a( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.193686 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.c( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.169838 1 0.000038
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.c( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.172001 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.c( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.201102 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1d( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.177035 1 0.000025
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1d( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.179243 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1d( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.213665 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.9( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.184399 1 0.000040
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.9( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.186675 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.9( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.215976 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.191819 1 0.000021
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.194135 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.222857 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1f( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.199001 1 0.000029
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1f( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.200777 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1f( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.230124 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.8( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.206388 1 0.000024
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.8( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.208237 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.8( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.238224 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.16( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.213750 1 0.000046
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.16( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.215576 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.16( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.247918 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.3( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.221190 1 0.000027
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.3( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.222994 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.3( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.253621 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.2( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.228321 1 0.000030
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.2( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.230214 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.2( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.261014 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.235660 1 0.000025
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.237579 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.268459 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.f( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.243151 1 0.000027
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.f( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.245354 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.f( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.276401 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.250462 1 0.000026
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.251683 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.283531 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.4( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.257652 1 0.000057
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.4( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.258989 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.4( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.290595 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1c( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.265265 1 0.000029
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1c( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.266651 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1c( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.297327 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.18( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.272596 1 0.000024
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.18( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.274046 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.18( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.308779 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1d( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.279719 1 0.000022
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1d( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.281192 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.1d( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.311969 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.19( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.286963 1 0.000027
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.19( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.288495 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[2.19( empty lb MIN local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.324616 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.2( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.294668 1 0.000018
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.2( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.296084 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.2( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.327706 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1e( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.301794 1 0.000021
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1e( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.303236 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 42 pg[5.1e( empty lb MIN local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [0] r=-1 lpr=41 pi=[37,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.339681 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61120512 unmapped: 737280 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 306159 data_alloc: 218103808 data_used: 0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 42 handle_osd_map epochs [42,43], i have 42, src has [1,43]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 729088 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 663552 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 43 handle_osd_map epochs [44,44], i have 43, src has [1,44]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 44 heartbeat osd_stat(store_statfs(0x4fe155000/0x0/0x4ffc00000, data 0x2dd6d/0x71000, compress 0x0/0x0/0x0, omap 0x52a4, meta 0x1a2ad5c), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 638976 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 630784 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61267968 unmapped: 589824 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 320891 data_alloc: 218103808 data_used: 0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61267968 unmapped: 589824 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 44 heartbeat osd_stat(store_statfs(0x4fe156000/0x0/0x4ffc00000, data 0x2f1ed/0x74000, compress 0x0/0x0/0x0, omap 0x552f, meta 0x1a2aad1), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61276160 unmapped: 581632 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61276160 unmapped: 581632 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.571439743s of 10.756100655s, submitted: 333
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61341696 unmapped: 516096 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 44 handle_osd_map epochs [45,46], i have 44, src has [1,46]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61399040 unmapped: 458752 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 331693 data_alloc: 218103808 data_used: 0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61431808 unmapped: 425984 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61456384 unmapped: 401408 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 46 heartbeat osd_stat(store_statfs(0x4fe152000/0x0/0x4ffc00000, data 0x31c83/0x7a000, compress 0x0/0x0/0x0, omap 0x57ba, meta 0x1a2a846), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 46 handle_osd_map epochs [47,47], i have 46, src has [1,47]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61472768 unmapped: 385024 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 47 handle_osd_map epochs [48,49], i have 47, src has [1,49]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61489152 unmapped: 368640 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61489152 unmapped: 368640 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 347710 data_alloc: 218103808 data_used: 0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61505536 unmapped: 352256 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61538304 unmapped: 319488 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 49 heartbeat osd_stat(store_statfs(0x4fe147000/0x0/0x4ffc00000, data 0x35d37/0x83000, compress 0x0/0x0/0x0, omap 0x5cd0, meta 0x1a2a330), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 49 handle_osd_map epochs [50,51], i have 49, src has [1,51]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61554688 unmapped: 303104 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 51 handle_osd_map epochs [51,52], i have 51, src has [1,52]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61456384 unmapped: 401408 heap: 61857792 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 52 pg[6.8(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=0 pi=[37,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000111 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=0 pi=[37,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000036
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000172 1 0.000065
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 52 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.001206398s of 11.043487549s, submitted: 15
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 52 pg[6.8( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000506 2 0.000091
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 52 pg[6.8( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=31'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 52 pg[6.8( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 52 pg[6.8( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=31'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61489152 unmapped: 1417216 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 358231 data_alloc: 218103808 data_used: 0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 52 handle_osd_map epochs [52,53], i have 52, src has [1,53]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 53 pg[6.8( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.734591 2 0.000068
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 53 pg[6.8( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.735357 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 53 pg[6.8( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=31'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 53 pg[6.8( v 31'39 (0'0,31'39] local-lis/les=52/53 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=31'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 53 pg[6.8( v 31'39 (0'0,31'39] local-lis/les=52/53 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 53 pg[6.8( v 31'39 (0'0,31'39] local-lis/les=52/53 n=1 ec=37/21 lis/c=52/37 les/c/f=53/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002088 3 0.000261
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 53 pg[6.8( v 31'39 (0'0,31'39] local-lis/les=52/53 n=1 ec=37/21 lis/c=52/37 les/c/f=53/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 53 pg[6.8( v 31'39 (0'0,31'39] local-lis/les=52/53 n=1 ec=37/21 lis/c=52/37 les/c/f=53/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 53 pg[6.8( v 31'39 (0'0,31'39] local-lis/les=52/53 n=1 ec=37/21 lis/c=52/37 les/c/f=53/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 53 handle_osd_map epochs [52,53], i have 53, src has [1,53]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 53 handle_osd_map epochs [53,53], i have 53, src has [1,53]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61505536 unmapped: 1400832 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61497344 unmapped: 1409024 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61505536 unmapped: 1400832 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 53 handle_osd_map epochs [54,55], i have 53, src has [1,55]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 55 heartbeat osd_stat(store_statfs(0x4fe133000/0x0/0x4ffc00000, data 0x3e035/0x95000, compress 0x0/0x0/0x0, omap 0x66fc, meta 0x1a29904), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61521920 unmapped: 1384448 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 55 handle_osd_map epochs [55,56], i have 55, src has [1,56]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61530112 unmapped: 1376256 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 373346 data_alloc: 218103808 data_used: 0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 61571072 unmapped: 1335296 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 57 heartbeat osd_stat(store_statfs(0x4fe12f000/0x0/0x4ffc00000, data 0x40acb/0x9b000, compress 0x0/0x0/0x0, omap 0x6c12, meta 0x1a293ee), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 57 handle_osd_map epochs [58,58], i have 58, src has [1,58]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 294912 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 294912 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 58 handle_osd_map epochs [58,59], i have 58, src has [1,59]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 286720 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 278528 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 387999 data_alloc: 218103808 data_used: 0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 59 heartbeat osd_stat(store_statfs(0x4fe125000/0x0/0x4ffc00000, data 0x436f7/0xa1000, compress 0x0/0x0/0x0, omap 0x7128, meta 0x1a28ed8), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.855418205s of 10.907942772s, submitted: 19
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 270336 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 229376 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 221184 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 60 handle_osd_map epochs [61,61], i have 60, src has [1,61]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 61 pg[6.f(unlocked)] enter Initial
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=0 pi=[45,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000118 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=0 pi=[45,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000034 1 0.000063
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000097 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000324 1 0.000244
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 61 handle_osd_map epochs [60,61], i have 61, src has [1,61]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 212992 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 61 pg[6.f( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.000613 2 0.000083
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 61 pg[6.f( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 61 pg[6.f( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 61 pg[6.f( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 61 handle_osd_map epochs [61,62], i have 61, src has [1,62]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 61 handle_osd_map epochs [61,62], i have 62, src has [1,62]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 62 pg[6.f( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.761318 2 0.000212
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 62 pg[6.f( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 0.762372 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 62 pg[6.f( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 62 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=61/62 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 62 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=61/62 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 62 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=61/62 n=1 ec=37/21 lis/c=61/45 les/c/f=62/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.003264 3 0.000203
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 62 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=61/62 n=1 ec=37/21 lis/c=61/45 les/c/f=62/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 62 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=61/62 n=1 ec=37/21 lis/c=61/45 les/c/f=62/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000103 1 0.000101
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 62 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=61/62 n=1 ec=37/21 lis/c=61/45 les/c/f=62/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 62 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=61/62 n=1 ec=37/21 lis/c=61/45 les/c/f=62/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 62 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=61/62 n=1 ec=37/21 lis/c=61/45 les/c/f=62/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 handle_osd_map epochs [62,62], i have 62, src has [1,62]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 62 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=61/62 n=1 ec=37/21 lis/c=61/45 les/c/f=62/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.126923 3 0.000051
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 62 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=61/62 n=1 ec=37/21 lis/c=61/45 les/c/f=62/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 62 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=61/62 n=1 ec=37/21 lis/c=61/45 les/c/f=62/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 pg_epoch: 62 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=61/62 n=1 ec=37/21 lis/c=61/45 les/c/f=62/46/0 sis=61) [2] r=0 lpr=61 pi=[45,61)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 155648 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 407607 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe11e000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe11e000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 139264 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 131072 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe11e000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 237568 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 237568 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 237568 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 411421 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.d scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.d scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 221184 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.859597206s of 10.914040565s, submitted: 21
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 221184 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 204800 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 416245 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 196608 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62734336 unmapped: 172032 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 163840 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 163840 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 155648 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 421071 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 155648 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 155648 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.099511147s of 11.110563278s, submitted: 6
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 139264 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 122880 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 114688 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 425895 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 114688 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 114688 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 106496 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 106496 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 98304 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 428308 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 98304 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 90112 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.105876923s of 10.125753403s, submitted: 10
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 81920 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 81920 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 65536 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 437958 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 65536 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 65536 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 32768 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 32768 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 24576 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 440371 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 16384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 8192 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 0 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.043985367s of 11.063570023s, submitted: 6
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 442782 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 452430 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 999424 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 991232 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 983040 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.994940758s of 11.102183342s, submitted: 12
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 457252 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 925696 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 925696 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 462074 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 909312 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 892928 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 884736 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464485 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 884736 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.941810608s of 11.959441185s, submitted: 8
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 876544 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 860160 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 843776 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 471722 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 827392 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 819200 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 474135 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.860826492s of 10.089083672s, submitted: 10
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 786432 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 786432 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 481368 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 770048 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 679936 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 614400 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 606208 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 581632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 573440 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 483328 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 483328 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 475136 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 442368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 376832 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 376832 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 262144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 262144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 262144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 180224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 114688 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 114688 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 106496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 106496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 106496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 98304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 98304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 98304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 90112 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 90112 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 81920 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 81920 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 81920 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 73728 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 73728 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 73728 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 65536 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 65536 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 49152 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 925696 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 925696 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 671744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 671744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 663552 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 663552 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 663552 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 655360 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 655360 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 655360 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 647168 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 647168 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 638976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 638976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 638976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64380928 unmapped: 622592 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64380928 unmapped: 622592 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64397312 unmapped: 606208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64397312 unmapped: 606208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64397312 unmapped: 606208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 598016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 598016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 598016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 581632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 581632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 565248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 565248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 548864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 548864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 548864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 524288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 524288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 524288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 442368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 442368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 425984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 425984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 417792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 417792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 417792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 401408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 401408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 401408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 393216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 393216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 352256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 352256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 319488 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 319488 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 319488 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 311296 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 311296 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 311296 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 311296 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 303104 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 303104 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 303104 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 294912 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 294912 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 286720 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 286720 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 286720 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 278528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 278528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 278528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 270336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 270336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 270336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 262144 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 262144 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 4150 writes, 19K keys, 4150 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 4150 writes, 366 syncs, 11.34 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4150 writes, 19K keys, 4150 commit groups, 1.0 writes per commit group, ingest: 16.19 MB, 0.03 MB/s
Interval WAL: 4150 writes, 366 syncs, 11.34 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559f074b98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559f074b98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 172032 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 172032 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 163840 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 163840 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 163840 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 155648 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 155648 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 155648 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 147456 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 147456 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 139264 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 139264 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 139264 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 131072 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 131072 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 122880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 122880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 122880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 114688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 114688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 106496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 106496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 106496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64905216 unmapped: 98304 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64905216 unmapped: 98304 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 81920 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 81920 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 81920 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 73728 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 73728 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64937984 unmapped: 65536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64937984 unmapped: 65536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64937984 unmapped: 65536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 57344 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 57344 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 57344 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 49152 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 49152 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 40960 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 40960 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 32768 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 32768 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 32768 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 24576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 24576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 16384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 16384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 16384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 8192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 8192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 0 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 0 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 0 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 1040384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 1040384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 1032192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 1024000 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 1024000 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 999424 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 974848 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 966656 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 966656 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 966656 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 958464 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 958464 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 942080 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 942080 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 942080 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 917504 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 917504 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 917504 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14752 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 4150 writes, 19K keys, 4150 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 4150 writes, 366 syncs, 11.34 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559f074b98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559f074b98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown,
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65347584 unmapped: 704512 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65347584 unmapped: 704512 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 1103.114379883s of 1103.126831055s, submitted: 6
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 548864 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 17219584 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 64 handle_osd_map epochs [64,65], i have 64, src has [1,65]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 65 ms_handle_reset con 0x559f08fd3c00 session 0x559f09d6b500
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65699840 unmapped: 17137664 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 65 heartbeat osd_stat(store_statfs(0x4fd914000/0x0/0x4ffc00000, data 0x84bbf8/0x8b6000, compress 0x0/0x0/0x0, omap 0x85f8, meta 0x1a27a08), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65699840 unmapped: 17137664 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 539076 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 17006592 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 66 ms_handle_reset con 0x559f09d9a800 session 0x559f0b0ea380
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 66 heartbeat osd_stat(store_statfs(0x4fd910000/0x0/0x4ffc00000, data 0x84d204/0x8ba000, compress 0x0/0x0/0x0, omap 0x8924, meta 0x1a276dc), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 543588 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 66 heartbeat osd_stat(store_statfs(0x4fd910000/0x0/0x4ffc00000, data 0x84d204/0x8ba000, compress 0x0/0x0/0x0, omap 0x8924, meta 0x1a276dc), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 543588 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.887508392s of 16.273990631s, submitted: 31
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 546216 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 546216 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 546216 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 546216 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 546216 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.484739304s of 26.491054535s, submitted: 13
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 16736256 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 68 ms_handle_reset con 0x559f0b2db800 session 0x559f0b1041c0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 16687104 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 552517 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 16687104 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd908000/0x0/0x4ffc00000, data 0x8500a4/0x8c2000, compress 0x0/0x0/0x0, omap 0x8f21, meta 0x1a270df), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 16588800 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 24756224 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 24715264 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 69 ms_handle_reset con 0x559f0b2db400 session 0x559f0b09f6c0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 69 heartbeat osd_stat(store_statfs(0x4fc10a000/0x0/0x4ffc00000, data 0x20500a4/0x20c2000, compress 0x0/0x0/0x0, omap 0x9267, meta 0x1a26d99), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 24690688 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 70 ms_handle_reset con 0x559f0b2dbc00 session 0x559f09d6a540
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 563102 data_alloc: 218103808 data_used: 666
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 23683072 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 70 heartbeat osd_stat(store_statfs(0x4fc105000/0x0/0x4ffc00000, data 0x2051671/0x20c5000, compress 0x0/0x0/0x0, omap 0x95e7, meta 0x1a26a19), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 71 ms_handle_reset con 0x559f08fd3c00 session 0x559f09ce7180
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 71 ms_handle_reset con 0x559f0b2db400 session 0x559f0b0b1dc0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fd8fe000/0x0/0x4ffc00000, data 0x852cc5/0x8ca000, compress 0x0/0x0/0x0, omap 0x9dc1, meta 0x1a2623f), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 23642112 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 71 ms_handle_reset con 0x559f0b2db800 session 0x559f0b0c9c00
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 71 ms_handle_reset con 0x559f09dcbc00 session 0x559f0b0c8380
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 23379968 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fd8fe000/0x0/0x4ffc00000, data 0x853ec3/0x8cb000, compress 0x0/0x0/0x0, omap 0xa06d, meta 0x1a25f93), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 23240704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.315886497s of 10.602708817s, submitted: 111
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 72 ms_handle_reset con 0x559f09dcb800 session 0x559f0b0c8fc0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 72 ms_handle_reset con 0x559f08fd3c00 session 0x559f0af396c0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 72 ms_handle_reset con 0x559f09dcbc00 session 0x559f0b079a40
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 23543808 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 572170 data_alloc: 218103808 data_used: 4743
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 23289856 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 73 ms_handle_reset con 0x559f08567c00 session 0x559f0b078c40
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 73 ms_handle_reset con 0x559f08566c00 session 0x559f09cc2700
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68403200 unmapped: 22831104 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 74 ms_handle_reset con 0x559f0af3bc00 session 0x559f09cc3a40
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 74 ms_handle_reset con 0x559f08566400 session 0x559f0b05cc40
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 22675456 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 22675456 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 75 heartbeat osd_stat(store_statfs(0x4fd8f1000/0x0/0x4ffc00000, data 0x8595b7/0x8d5000, compress 0x0/0x0/0x0, omap 0xb3a3, meta 0x1a24c5d), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 22675456 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 75 heartbeat osd_stat(store_statfs(0x4fd8f1000/0x0/0x4ffc00000, data 0x8595b7/0x8d5000, compress 0x0/0x0/0x0, omap 0xb3a3, meta 0x1a24c5d), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 580126 data_alloc: 218103808 data_used: 4727
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 22642688 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68403200 unmapped: 22831104 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 75 ms_handle_reset con 0x559f0af3bc00 session 0x559f0977ea80
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68403200 unmapped: 22831104 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 22773760 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.737900734s of 10.061096191s, submitted: 136
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 77 ms_handle_reset con 0x559f08566c00 session 0x559f0977f880
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 22659072 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 78 ms_handle_reset con 0x559f09dcbc00 session 0x559f0b0c9340
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 78 heartbeat osd_stat(store_statfs(0x4fd8eb000/0x0/0x4ffc00000, data 0x85d114/0x8df000, compress 0x0/0x0/0x0, omap 0xc01d, meta 0x1a23fe3), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 602228 data_alloc: 218103808 data_used: 12849
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 78 heartbeat osd_stat(store_statfs(0x4fd8eb000/0x0/0x4ffc00000, data 0x85d114/0x8df000, compress 0x0/0x0/0x0, omap 0xc01d, meta 0x1a23fe3), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 22609920 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 79 ms_handle_reset con 0x559f0b2dbc00 session 0x559f09ce7180
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 22757376 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 80 ms_handle_reset con 0x559f09dcbc00 session 0x559f0b0c9880
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 69566464 unmapped: 21667840 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 81 ms_handle_reset con 0x559f08567c00 session 0x559f0977ee00
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 21610496 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 21610496 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 620604 data_alloc: 218103808 data_used: 12849
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 21610496 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 82 heartbeat osd_stat(store_statfs(0x4fd8d1000/0x0/0x4ffc00000, data 0x86555b/0x8f3000, compress 0x0/0x0/0x0, omap 0xd14b, meta 0x1a22eb5), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 21610496 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 83 ms_handle_reset con 0x559f08566c00 session 0x559f09cc2c40
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 20512768 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 84 ms_handle_reset con 0x559f08566400 session 0x559f09d6bdc0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 20488192 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.950626373s of 10.085215569s, submitted: 81
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 85 ms_handle_reset con 0x559f0af3bc00 session 0x559f0af39500
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 19333120 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 86 ms_handle_reset con 0x559f08566400 session 0x559f0af39c00
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 625457 data_alloc: 218103808 data_used: 12849
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 19038208 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 87 ms_handle_reset con 0x559f08566c00 session 0x559f0af388c0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 87 heartbeat osd_stat(store_statfs(0x4fd8d1000/0x0/0x4ffc00000, data 0x8695c0/0x8f9000, compress 0x0/0x0/0x0, omap 0xe12c, meta 0x1a21ed4), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 18882560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 88 ms_handle_reset con 0x559f08567c00 session 0x559f09cc3180
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 18857984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 18808832 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 88 heartbeat osd_stat(store_statfs(0x4fd8ca000/0x0/0x4ffc00000, data 0x86b287/0x8fc000, compress 0x0/0x0/0x0, omap 0xeb23, meta 0x1a214dd), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 18808832 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 629504 data_alloc: 218103808 data_used: 12849
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 18808832 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 18808832 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 88 heartbeat osd_stat(store_statfs(0x4fd8ca000/0x0/0x4ffc00000, data 0x86b287/0x8fc000, compress 0x0/0x0/0x0, omap 0xeb23, meta 0x1a214dd), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 18784256 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 18784256 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 18767872 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 631348 data_alloc: 218103808 data_used: 12849
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 18767872 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.084028244s of 12.300132751s, submitted: 126
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 18702336 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 90 ms_handle_reset con 0x559f09dcbc00 session 0x559f0b05c380
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 18694144 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 90 heartbeat osd_stat(store_statfs(0x4fc726000/0x0/0x4ffc00000, data 0x86e168/0x904000, compress 0x0/0x0/0x0, omap 0xf537, meta 0x2bc0ac9), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 18694144 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 18694144 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 638855 data_alloc: 218103808 data_used: 12849
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 18694144 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 18694144 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 90 ms_handle_reset con 0x559f0b2db800 session 0x559f09cc3c00
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 90 heartbeat osd_stat(store_statfs(0x4fc726000/0x0/0x4ffc00000, data 0x86e168/0x904000, compress 0x0/0x0/0x0, omap 0xf537, meta 0x2bc0ac9), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 18604032 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 92 ms_handle_reset con 0x559f08566c00 session 0x559f0af38000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 92 ms_handle_reset con 0x559f08566400 session 0x559f090328c0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 92 ms_handle_reset con 0x559f08567c00 session 0x559f0977e700
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 18505728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 18505728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 646548 data_alloc: 218103808 data_used: 12865
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 18505728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 18505728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 92 heartbeat osd_stat(store_statfs(0x4fc71e000/0x0/0x4ffc00000, data 0x870d56/0x90a000, compress 0x0/0x0/0x0, omap 0xfe38, meta 0x2bc01c8), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 18505728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 18505728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 18505728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 646548 data_alloc: 218103808 data_used: 12865
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 18505728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 18505728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 92 ms_handle_reset con 0x559f09dcbc00 session 0x559f0b09f880
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.700133324s of 15.770095825s, submitted: 49
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fc71e000/0x0/0x4ffc00000, data 0x870d56/0x90a000, compress 0x0/0x0/0x0, omap 0xfe38, meta 0x2bc01c8), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 93 ms_handle_reset con 0x559f0b2db400 session 0x559f0b05d180
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 18374656 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 18374656 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 93 ms_handle_reset con 0x559f08566400 session 0x559f0b05ddc0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 18341888 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 95 ms_handle_reset con 0x559f08566c00 session 0x559f0b05da40
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 660485 data_alloc: 218103808 data_used: 12865
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 95 heartbeat osd_stat(store_statfs(0x4fc717000/0x0/0x4ffc00000, data 0x8737db/0x911000, compress 0x0/0x0/0x0, omap 0x10399, meta 0x2bbfc67), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 17973248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 17973248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 17973248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 95 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x898e2a/0x939000, compress 0x0/0x0/0x0, omap 0x10594, meta 0x2bbfa6c), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 17874944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 96 ms_handle_reset con 0x559f0af1d800 session 0x559f09032fc0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 17891328 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 97 ms_handle_reset con 0x559f0af1d400 session 0x559f0af38700
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 666817 data_alloc: 218103808 data_used: 19521
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fc6ee000/0x0/0x4ffc00000, data 0x89a413/0x93c000, compress 0x0/0x0/0x0, omap 0x10a4b, meta 0x2bbf5b5), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 16842752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 97 ms_handle_reset con 0x559f0ae9fc00 session 0x559f0b05c540
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 97 ms_handle_reset con 0x559f08566400 session 0x559f0b078700
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 16678912 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.905013084s of 10.014015198s, submitted: 64
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 98 ms_handle_reset con 0x559f08566c00 session 0x559f099688c0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 16646144 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 99 ms_handle_reset con 0x559f0af1d400 session 0x559f0aa6cc40
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 16629760 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 100 ms_handle_reset con 0x559f0af1d800 session 0x559f0977fc00
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 16605184 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 100 heartbeat osd_stat(store_statfs(0x4fc6e2000/0x0/0x4ffc00000, data 0x89fcaa/0x948000, compress 0x0/0x0/0x0, omap 0x11e47, meta 0x2bbe1b9), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 677227 data_alloc: 218103808 data_used: 19505
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 16605184 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 16605184 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 16605184 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 100 ms_handle_reset con 0x559f0ae9f000 session 0x559f0b05d500
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 100 ms_handle_reset con 0x559f08566c00 session 0x559f0b05c8c0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 100 ms_handle_reset con 0x559f08566400 session 0x559f0b0c9180
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 100 ms_handle_reset con 0x559f0af1d400 session 0x559f0b0d8a80
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 100 ms_handle_reset con 0x559f0af1d800 session 0x559f0af39500
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 100 ms_handle_reset con 0x559f08568400 session 0x559f09d6bc00
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 100 ms_handle_reset con 0x559f08566400 session 0x559f0b079dc0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 16408576 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 100 heartbeat osd_stat(store_statfs(0x4fc6e2000/0x0/0x4ffc00000, data 0x89fcaa/0x948000, compress 0x0/0x0/0x0, omap 0x11f49, meta 0x2bbe0b7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 16408576 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 100 ms_handle_reset con 0x559f08566c00 session 0x559f0b0eb6c0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 678372 data_alloc: 218103808 data_used: 20137
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 100 heartbeat osd_stat(store_statfs(0x4fc6e2000/0x0/0x4ffc00000, data 0x89fcaa/0x948000, compress 0x0/0x0/0x0, omap 0x11f49, meta 0x2bbe0b7), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 16392192 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 16392192 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.915824890s of 10.005084038s, submitted: 54
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 16392192 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 16457728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 101 ms_handle_reset con 0x559f0b2dbc00 session 0x559f08c38540
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 102 ms_handle_reset con 0x559f0b2db400 session 0x559f090328c0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 102 ms_handle_reset con 0x559f0b2ea400 session 0x559f0977ea80
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 102 ms_handle_reset con 0x559f0b2ea000 session 0x559f0aa6ddc0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 102 ms_handle_reset con 0x559f08566400 session 0x559f0b05da40
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 16064512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 102 ms_handle_reset con 0x559f08566c00 session 0x559f0b0c96c0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 689739 data_alloc: 218103808 data_used: 24268
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fc6db000/0x0/0x4ffc00000, data 0x8a2782/0x94f000, compress 0x0/0x0/0x0, omap 0x12951, meta 0x2bbd6af), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 16023552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 104 ms_handle_reset con 0x559f0b2db400 session 0x559f0b083180
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 16015360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 104 ms_handle_reset con 0x559f0b2dbc00 session 0x559f09cc21c0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 104 ms_handle_reset con 0x559f0b2dbc00 session 0x559f0aa6d6c0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 104 ms_handle_reset con 0x559f0b2ea800 session 0x559f0b0eaa80
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 15998976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 104 ms_handle_reset con 0x559f0b2eac00 session 0x559f0af38000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 14901248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 105 ms_handle_reset con 0x559f0b2eb000 session 0x559f0b082fc0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 14868480 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694003 data_alloc: 218103808 data_used: 24268
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 105 ms_handle_reset con 0x559f0af1d400 session 0x559f0b0c9340
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 105 ms_handle_reset con 0x559f0af1d800 session 0x559f0977f340
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 14868480 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fc6d2000/0x0/0x4ffc00000, data 0x8a69c9/0x958000, compress 0x0/0x0/0x0, omap 0x1384e, meta 0x2bbc7b2), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 106 ms_handle_reset con 0x559f0b2dbc00 session 0x559f09cc2000
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 14860288 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 14819328 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.597695351s of 11.822847366s, submitted: 164
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 14819328 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 107 heartbeat osd_stat(store_statfs(0x4fc6ce000/0x0/0x4ffc00000, data 0x8a948c/0x95c000, compress 0x0/0x0/0x0, omap 0x13f68, meta 0x2bbc098), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 107 ms_handle_reset con 0x559f08567c00 session 0x559f09032e00
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 107 ms_handle_reset con 0x559f09dcbc00 session 0x559f0b078e00
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 14819328 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 107 ms_handle_reset con 0x559f0b2ea800 session 0x559f09033180
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695237 data_alloc: 218103808 data_used: 19252
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 14909440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 107 heartbeat osd_stat(store_statfs(0x4fc6f4000/0x0/0x4ffc00000, data 0x885469/0x937000, compress 0x0/0x0/0x0, omap 0x13f68, meta 0x2bbc098), peers [0,1] op hist [0,0,0,1])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 107 ms_handle_reset con 0x559f08567c00 session 0x559f090328c0
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 14909440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 108 ms_handle_reset con 0x559f09dcbc00 session 0x559f08c38700
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 108 heartbeat osd_stat(store_statfs(0x4fc6ef000/0x0/0x4ffc00000, data 0x886ad2/0x93a000, compress 0x0/0x0/0x0, omap 0x145c3, meta 0x2bbba3d), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 108 heartbeat osd_stat(store_statfs(0x4fc6ef000/0x0/0x4ffc00000, data 0x886ad2/0x93a000, compress 0x0/0x0/0x0, omap 0x145c3, meta 0x2bbba3d), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 698135 data_alloc: 218103808 data_used: 19252
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 108 heartbeat osd_stat(store_statfs(0x4fc6ef000/0x0/0x4ffc00000, data 0x886ad2/0x93a000, compress 0x0/0x0/0x0, omap 0x145c3, meta 0x2bbba3d), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.643769264s of 10.903412819s, submitted: 68
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6ef000/0x0/0x4ffc00000, data 0x886ad2/0x93a000, compress 0x0/0x0/0x0, omap 0x145c3, meta 0x2bbba3d), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 700845 data_alloc: 218103808 data_used: 23313
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6ed000/0x0/0x4ffc00000, data 0x887f9e/0x93d000, compress 0x0/0x0/0x0, omap 0x14858, meta 0x2bbb7a8), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1197832626' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread fragmentation_score=0.000147 took=0.000030s
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 14753792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: do_command 'config diff' '{prefix=config diff}'
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: do_command 'config show' '{prefix=config show}'
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: do_command 'counter dump' '{prefix=counter dump}'
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: do_command 'counter schema' '{prefix=counter schema}'
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 14147584 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 14344192 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 14286848 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:37 np0005544708 ceph-osd[88129]: do_command 'log dump' '{prefix=log dump}'
Dec  3 16:34:38 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14754 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:34:38 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v889: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:38 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec  3 16:34:38 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3748945380' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec  3 16:34:38 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14758 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:34:38 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec  3 16:34:38 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2192794819' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec  3 16:34:39 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14762 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:34:39 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  3 16:34:39 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2014216378' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec  3 16:34:39 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14766 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:34:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  3 16:34:40 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3857402893' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec  3 16:34:40 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14770 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:34:40 np0005544708 podman[250314]: 2025-12-03 21:34:40.134774119 +0000 UTC m=+0.074588065 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec  3 16:34:40 np0005544708 podman[250317]: 2025-12-03 21:34:40.148367825 +0000 UTC m=+0.099206877 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 16:34:40 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v890: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:40 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14774 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 16:34:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  3 16:34:40 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3806866220' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec  3 16:34:40 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14776 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 16:34:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec  3 16:34:41 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/726762375' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec  3 16:34:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:34:41 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14780 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 16:34:42 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.14784 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 16:34:42 np0005544708 ceph-mgr[75500]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 16:34:42 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jxauqt[75496]: 2025-12-03T21:34:42.009+0000 7fa8c63a3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 16:34:42 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v891: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:34:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Dec  3 16:34:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1602125538' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec  3 16:34:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec  3 16:34:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2487170743' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864461899s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974800110s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864461899s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974800110s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864461899s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974800110s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864461899s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974800110s@ mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864461899s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974800110s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.072302 1 0.000031
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.074917 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.074968 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.074984 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.927361488s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.037757874s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.927351952s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.037757874s@ mbc={}] exit Reset 0.000017 1 0.000030
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.927351952s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.037757874s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.927351952s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.037757874s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.927351952s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.037757874s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.927351952s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.037757874s@ mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.927351952s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.037757874s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.135186 14 0.000091
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.143368 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.143496 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.143526 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864676476s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.975151062s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864668846s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975151062s@ mbc={}] exit Reset 0.000018 1 0.000032
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864668846s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975151062s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864668846s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975151062s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864668846s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975151062s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864668846s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975151062s@ mbc={}] exit Start 0.000006 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864668846s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975151062s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.135368 14 0.000081
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.143436 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.143633 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.143708 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864478111s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.975067139s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864466667s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975067139s@ mbc={}] exit Reset 0.000022 1 0.000044
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864466667s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975067139s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864466667s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975067139s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864466667s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975067139s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864466667s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975067139s@ mbc={}] exit Start 0.000007 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864466667s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.975067139s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.070823 1 0.000088
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.075054 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.075175 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.075208 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928797722s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039489746s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928786278s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039489746s@ mbc={}] exit Reset 0.000023 1 0.000045
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928786278s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039489746s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928786278s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039489746s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928786278s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039489746s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928786278s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039489746s@ mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928786278s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039489746s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.135644 14 0.000067
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.143686 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.143995 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.144031 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864211082s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.974967957s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864202499s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974967957s@ mbc={}] exit Reset 0.000028 1 0.000029
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864202499s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974967957s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864202499s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974967957s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864202499s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974967957s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864202499s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974967957s@ mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.864202499s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974967957s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.071018 1 0.000052
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.075241 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.075282 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.075300 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928587914s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039421082s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928578377s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039421082s@ mbc={}] exit Reset 0.000018 1 0.000032
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928578377s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039421082s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928578377s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039421082s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928578377s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039421082s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928578377s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039421082s@ mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928578377s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039421082s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.135847 14 0.000080
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.144118 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.144218 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.144241 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.071180 1 0.000065
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.075225 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863828659s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.974784851s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.075321 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.075347 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863812447s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] exit Reset 0.000041 1 0.000055
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863812447s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863812447s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863812447s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863812447s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] exit Start 0.000006 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863812447s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928451538s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039451599s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928428650s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039451599s@ mbc={}] exit Reset 0.000048 1 0.000078
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928428650s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039451599s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.136256 14 0.000065
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928428650s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039451599s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.144335 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928428650s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039451599s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.144440 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928428650s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039451599s@ mbc={}] exit Start 0.000007 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.144540 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928428650s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039451599s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863724709s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.974784851s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863713264s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] exit Reset 0.000022 1 0.000037
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863713264s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863713264s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863713264s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863713264s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863713264s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.071263 1 0.000116
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.075296 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.075417 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.075459 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.136433 14 0.000425
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.144524 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928549767s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039695740s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.144672 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928540230s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039695740s@ mbc={}] exit Reset 0.000019 1 0.000032
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.144703 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928540230s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039695740s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928540230s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039695740s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928540230s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039695740s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928540230s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039695740s@ mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928540230s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039695740s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863598824s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.974769592s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863583565s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974769592s@ mbc={}] exit Reset 0.000045 1 0.000051
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863583565s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974769592s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.136453 14 0.000080
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.144670 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863583565s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974769592s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.144811 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863583565s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974769592s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863583565s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974769592s@ mbc={}] exit Start 0.000007 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.144830 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863583565s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974769592s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863542557s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.974784851s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863531113s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] exit Reset 0.000021 1 0.000043
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863531113s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863531113s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863531113s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863531113s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863531113s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974784851s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.071248 1 0.000068
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.075328 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.071301 1 0.000047
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.075512 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.075322 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.075467 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.075540 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.075493 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928400040s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039718628s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928479195s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039802551s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928391457s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039718628s@ mbc={}] exit Reset 0.000018 1 0.000031
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928391457s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039718628s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928391457s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039718628s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928391457s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039718628s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928464890s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039802551s@ mbc={}] exit Reset 0.000030 1 0.000047
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928391457s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039718628s@ mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928391457s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039718628s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928464890s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039802551s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928464890s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039802551s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928464890s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039802551s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928464890s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039802551s@ mbc={}] exit Start 0.000007 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928464890s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039802551s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.141038 14 0.000093
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.145119 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.145353 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.145378 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858943939s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.970329285s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858925819s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970329285s@ mbc={}] exit Reset 0.000030 1 0.000045
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858925819s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970329285s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858925819s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970329285s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.071366 1 0.000059
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858925819s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970329285s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.075404 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858925819s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970329285s@ mbc={}] exit Start 0.000005 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858925819s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970329285s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.075459 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.075507 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928402901s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039848328s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928389549s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039848328s@ mbc={}] exit Reset 0.000029 1 0.000046
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.136607 14 0.000144
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.144981 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928389549s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039848328s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.145437 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928389549s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039848328s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928389549s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039848328s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.145464 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928389549s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039848328s@ mbc={}] exit Start 0.000007 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928389549s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039848328s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863302231s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.974792480s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863292694s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974792480s@ mbc={}] exit Reset 0.000023 1 0.000038
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863292694s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974792480s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863292694s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974792480s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863292694s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974792480s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863292694s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974792480s@ mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.863292694s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.974792480s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.071456 1 0.000055
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.075354 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.141402 14 0.000135
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.075469 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.145225 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.145358 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.075498 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.145390 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858806610s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.970382690s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928353310s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039932251s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858794212s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970382690s@ mbc={}] exit Reset 0.000022 1 0.000035
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858794212s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970382690s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928339958s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039932251s@ mbc={}] exit Reset 0.000030 1 0.000057
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858794212s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970382690s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928339958s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039932251s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858794212s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970382690s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928339958s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039932251s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858794212s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970382690s@ mbc={}] exit Start 0.000006 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928339958s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039932251s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858794212s) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970382690s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928339958s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039932251s@ mbc={}] exit Start 0.000007 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928339958s) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039932251s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 7.071529 1 0.000033
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 7.075485 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.141455 14 0.000157
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 7.075542 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.145358 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 11.145486 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] exit Started 7.075563 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 11.145524 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928306580s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 active pruub 92.039985657s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858775139s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 active pruub 95.970458984s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928293228s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039985657s@ mbc={}] exit Reset 0.000028 1 0.000046
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928293228s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039985657s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928293228s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039985657s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928293228s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039985657s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858759880s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970458984s@ mbc={}] exit Reset 0.000030 1 0.000049
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928293228s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039985657s@ mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41 pruub=8.928293228s) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY pruub 92.039985657s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858759880s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970458984s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858759880s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970458984s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858759880s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970458984s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858759880s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970458984s@ mbc={}] exit Start 0.000007 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41 pruub=12.858759880s) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY pruub 95.970458984s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.f( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.007640 2 0.000043
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.f( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.f( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.f( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.d( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007408 2 0.000038
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.d( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.d( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.d( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.d( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.007232 2 0.000040
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.d( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.d( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.d( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.f( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007049 2 0.000028
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.f( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.f( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.f( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.006920 2 0.000021
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.2( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006800 2 0.000020
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.2( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.2( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.2( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.4( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006658 2 0.000021
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.4( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.4( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.4( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.17(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000050 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000012
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000024
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.11(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000009
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000050 1 0.000025
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.15(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000125 1 0.000130
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000067 1 0.000029
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.12(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000025
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.13(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000029 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000023 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000042
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.16(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000036 1 0.000022
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.d(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000027 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000031
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.9(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000153 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000033 1 0.000049
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000148 1 0.000080
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.7(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000052 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000010
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000062 1 0.000030
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.3(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000032 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000039 1 0.000027
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.4(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000032 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000014
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000021
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.5(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000085 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000020
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000084 1 0.000048
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000052 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000013
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000062 1 0.000032
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.f(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000048 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000086 1 0.000040
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.6(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000075 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000019
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000087 1 0.000047
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.9(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000048 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000013
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000054 1 0.000033
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.a(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=0 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000036
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.c(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000069 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000026
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000080 1 0.000046
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1a(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000035 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000044 1 0.000023
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.19(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000008
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000022
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.18(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000049 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=0 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000061 1 0.000044
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019423 2 0.000022
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019954 2 0.000030
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016319 2 0.000025
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016168 2 0.000021
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.1( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.023069 2 0.000039
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.1( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.1( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.1( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.5( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.022473 2 0.000019
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.5( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.5( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.5( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.7( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.022394 2 0.000017
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.7( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.7( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.7( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.9( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.022277 2 0.000018
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.9( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.9( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.9( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.9( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021967 2 0.000036
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.022125 2 0.000023
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.9( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.9( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.9( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.14( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021627 2 0.000082
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.14( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.14( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.8( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021839 2 0.000026
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.14( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.8( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.8( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.8( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.12( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021460 2 0.000063
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.12( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.12( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.12( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.10( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021168 2 0.000043
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.10( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.10( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.10( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.7( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.023124 2 0.000027
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.7( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.7( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[4.7( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.5( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.023567 2 0.000031
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.5( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.5( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[6.5( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016866 2 0.000025
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016603 2 0.000018
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016502 2 0.000016
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016403 2 0.000029
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015846 2 0.000085
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015664 2 0.000022
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017017 2 0.000019
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015446 2 0.000019
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011716 2 0.000036
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011174 2 0.000026
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000040 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015655 2 0.000018
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000033 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009529 2 0.000027
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009386 2 0.000022
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008901 2 0.000035
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000036 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000084 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010162 2 0.000023
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007575 2 0.000033
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007427 2 0.000017
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009954 2 0.000036
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000033 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007207 2 0.000019
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 294912 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 41 handle_osd_map epochs [41,42], i have 41, src has [1,42]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 41 handle_osd_map epochs [42,42], i have 42, src has [1,42]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004047 2 0.000030
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011584 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.d( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004025 2 0.000039
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.d( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.011406 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.d( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004087 2 0.000047
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011237 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004011 2 0.000035
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003997 2 0.000019
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010872 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.1( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.987564 2 0.000022
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.1( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010731 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.1( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.1( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.987406 2 0.000022
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009125 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.987426 2 0.000023
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008985 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.987441 2 0.000029
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008745 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.988400 2 0.000020
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.011046 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007913 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.988003 2 0.000015
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004247 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004630 2 0.000019
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011370 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.987875 2 0.000058
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.010095 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.987974 2 0.000021
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010023 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.988415 2 0.000021
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004817 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982640 2 0.000024
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999315 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982426 2 0.000041
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.982785 2 0.000041
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999751 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999602 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.f( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005723 2 0.000035
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.f( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.013600 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.f( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.983075 2 0.000030
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999646 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.988845 2 0.000032
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.983085 2 0.000026
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010780 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999117 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.983348 2 0.000026
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999834 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.9( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989448 2 0.000035
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.9( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011811 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.9( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.983235 2 0.000086
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992748 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.983565 2 0.000042
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999302 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.7( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989879 2 0.000035
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.7( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.012348 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.7( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.7( v 31'39 lc 31'21 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990075 2 0.000055
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012660 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989930 2 0.000031
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013146 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=35/37 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.5( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989823 2 0.000068
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.5( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.013484 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.5( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.984258 2 0.000115
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996196 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.5( v 31'39 lc 31'11 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.984441 2 0.000059
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999954 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.984220 2 0.000031
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994303 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.984806 2 0.000034
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000568 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.984659 2 0.000058
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995955 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.984639 2 0.000129
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.984681 2 0.000126
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994988 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.984859 2 0.000154
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993894 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991607 2 0.000019
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011693 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994350 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.984824 2 0.000095
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992500 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 42 handle_osd_map epochs [42,42], i have 42, src has [1,42]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.984980 2 0.000061
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992491 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.985220 2 0.000151
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992622 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005264 4 0.000483
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.007345 4 0.000294
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.1( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007641 4 0.000207
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.1( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007566 4 0.000084
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000469 1 0.000077
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.1( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.1( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.1( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007566 4 0.000092
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007513 4 0.000075
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007447 4 0.000095
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007328 4 0.000060
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007279 4 0.000059
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007582 4 0.000141
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.007296 4 0.000082
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007179 4 0.000140
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007016 4 0.000070
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008118 4 0.000187
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006839 4 0.000047
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006964 4 0.000065
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006868 4 0.000136
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.006613 4 0.000194
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006444 4 0.000059
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006564 4 0.000062
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006502 4 0.000059
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006273 4 0.000069
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006038 4 0.000136
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.7( v 31'39 lc 31'21 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=35/35 les/c/f=37/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008786 4 0.000094
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008404 4 0.000254
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.7( v 31'39 lc 31'21 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.008703 4 0.000154
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.7( v 31'39 lc 31'21 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008368 4 0.000734
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008363 4 0.000047
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=41/42 n=0 ec=35/18 lis/c=41/35 les/c/f=42/37/0 sis=41) [1] r=0 lpr=41 pi=[35,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.5( v 31'39 lc 31'11 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008358 4 0.000076
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=37/37 les/c/f=38/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008456 4 0.000068
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.5( v 31'39 lc 31'11 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.008512 4 0.000163
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008239 4 0.000050
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.5( v 31'39 lc 31'11 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008203 4 0.000035
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008429 4 0.000129
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.009964 4 0.002068
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008008 4 0.000071
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008148 4 0.000105
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008020 4 0.000048
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007856 4 0.000053
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008085 4 0.000393
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=41/42 n=0 ec=33/16 lis/c=41/33 les/c/f=42/34/0 sis=41) [1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008114 4 0.000045
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007680 4 0.000052
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016558 7 0.000126
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=41/42 n=0 ec=37/19 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019142 7 0.000039
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026285 7 0.000053
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025386 7 0.000041
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025612 7 0.000073
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026279 7 0.000104
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030391 7 0.000053
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031150 7 0.000097
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027714 7 0.000044
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028240 7 0.000069
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029597 7 0.000038
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029859 7 0.000053
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030076 7 0.000058
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030432 7 0.000036
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032746 7 0.000040
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030925 7 0.000067
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030855 7 0.000034
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030852 7 0.000033
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031334 7 0.000038
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031263 7 0.000100
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029704 7 0.000036
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029801 7 0.000031
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031794 7 0.000037
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031695 7 0.000041
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030591 7 0.000049
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032150 7 0.000036
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032429 7 0.000031
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032849 7 0.000038
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031720 7 0.000030
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032659 7 0.000041
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031445 7 0.000034
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031323 7 0.000043
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031751 7 0.000034
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033107 7 0.000103
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031382 7 0.000065
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031044 7 0.000050
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030988 7 0.000061
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031533 7 0.000029
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031087 7 0.000033
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032142 7 0.000059
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031458 7 0.000041
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.068217 2 0.000062
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.068134 2 0.000057
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000010 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1130496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.223052 1 0.000114
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.291114 2 0.000028
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000009 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.f( v 31'39 lc 31'1 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.214071 1 0.000126
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.7( v 31'39 lc 31'21 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.502137 2 0.000025
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.7( v 31'39 lc 31'21 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.7( v 31'39 lc 31'21 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000008 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.7( v 31'39 lc 31'21 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.058969 1 0.000095
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.5( v 31'39 lc 31'11 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.560842 2 0.000119
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.5( v 31'39 lc 31'11 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.5( v 31'39 lc 31'11 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000020 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.5( v 31'39 lc 31'11 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.737629 2 0.000026
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000009 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.3( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.177085 1 0.000085
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.114085 1 0.000105
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000020 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/37 les/c/f=42/38/0 sis=41) [1] r=0 lpr=41 pi=[37,41)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.851553 1 0.000018
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.851675 1 0.000013
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.845792 1 0.000037
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.845848 1 0.000024
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.845995 1 0.000091
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.846167 1 0.000148
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841339 1 0.000050
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841329 1 0.000044
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841456 1 0.000031
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841700 1 0.000190
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.840234 1 0.000067
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.840313 1 0.000033
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.840374 1 0.000018
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.840484 1 0.000124
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.840600 1 0.000054
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.840668 1 0.000019
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.840741 1 0.000018
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.840738 1 0.000023
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.840844 1 0.000024
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841225 1 0.000052
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841351 1 0.000018
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841412 1 0.000050
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.840687 1 0.000754
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.840742 1 0.000017
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.840834 1 0.000020
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.840906 1 0.000016
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841008 1 0.000017
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841114 1 0.000051
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841167 1 0.000021
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841113 1 0.000065
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841159 1 0.000041
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841244 1 0.000040
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841192 1 0.000025
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841273 1 0.000043
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841316 1 0.000023
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841489 1 0.000037
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841544 1 0.000022
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841609 1 0.000020
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841621 1 0.000059
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841657 1 0.000058
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.841711 1 0.000053
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.009797 1 0.000180
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.861409 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.877998 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.12( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.015301 1 0.000210
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.12( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.867016 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.12( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.886187 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022171 1 0.000072
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.868016 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.894350 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1c( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.029487 1 0.000080
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1c( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.875410 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1c( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.901725 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036653 1 0.000054
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.882724 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.908383 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.043878 1 0.000091
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.890102 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.915533 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.051158 1 0.000049
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.892578 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.923014 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1b( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.058525 1 0.000070
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1b( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.899917 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.1b( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.927668 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.065727 1 0.000080
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.907229 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.935513 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.073072 1 0.000046
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.914853 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.946064 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.080329 1 0.000038
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.920611 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.950273 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.087647 1 0.000038
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.928009 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.957907 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.094977 1 0.000036
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.935396 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.968174 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.102270 1 0.000059
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.942825 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.972987 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.109735 1 0.000039
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.950458 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.981408 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.117104 1 0.000113
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.957750 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.988208 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.6( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.124311 1 0.000035
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.6( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.965097 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.6( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.995975 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.131483 1 0.000083
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.972273 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.003150 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.c( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.138972 1 0.000361
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.c( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.979871 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[3.c( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.011167 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 42 heartbeat osd_stat(store_statfs(0x4fe0e6000/0x0/0x4ffc00000, data 0x9cc8f/0xe4000, compress 0x0/0x0/0x0, omap 0x742b, meta 0x1a28bd5), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 42 handle_osd_map epochs [43,43], i have 42, src has [1,43]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.148147 1 0.000060
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.989448 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 42 pg[7.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.020817 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.153098 4 0.000064
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.994508 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.024237 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.1f( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.160500 4 0.000088
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.1f( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.001965 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.1f( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.031820 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.167859 4 0.000056
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.008616 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.041148 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.1( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.175123 4 0.000044
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.1( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.015911 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.1( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [0] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.047631 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.182393 4 0.000087
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.023275 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [0] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.053903 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.8( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.189882 4 0.000063
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.8( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.030856 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.8( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.063037 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.197042 4 0.000077
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.038092 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.070965 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.a( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.204241 4 0.000051
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.a( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.045411 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.a( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.077877 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.211531 4 0.000076
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.052776 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.084526 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.e( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.218829 4 0.000051
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.e( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.059993 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.e( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.092704 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.226098 4 0.000077
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.067301 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.098779 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.233665 4 0.000067
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.074972 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.106327 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.241100 4 0.000078
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.082347 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.115483 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.1( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.248354 4 0.000043
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.1( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.089689 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.1( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.121475 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.e(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=0 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000170 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=0 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000022 1 0.000046
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000018 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000230 1 0.000121
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.2(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=0 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000120 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=0 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000026
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000088 1 0.000045
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.6(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=0 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000054 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=0 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000017
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000067 1 0.000044
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.a(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=0 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000086 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=0 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000033
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000135 1 0.000062
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.2( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001474 2 0.000041
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.2( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.2( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.2( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.6( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001325 2 0.000353
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.6( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.6( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.6( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.a( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000647 2 0.000064
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.a( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.a( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.a( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.e( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002226 2 0.000075
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.e( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.e( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[6.e( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.c( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.257454 4 0.000083
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.c( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.098844 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.c( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.130255 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.264106 4 0.000051
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.105652 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.136722 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.1a( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.269901 4 0.000081
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.1a( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.111487 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.1a( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.142503 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.277365 4 0.000042
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.119015 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.150568 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.1e( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.284566 4 0.000042
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.1e( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.126233 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.1e( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.157369 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.291974 4 0.000050
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.133686 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[3.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/17 lis/c=35/35 les/c/f=36/36/0 sis=41) [2] r=-1 lpr=41 pi=[35,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.165877 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.e( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 DELETING pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.299386 4 0.000031
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.e( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.141150 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 43 pg[7.e( empty lb MIN local-lis/les=39/40 n=0 ec=39/23 lis/c=39/39 les/c/f=40/40/0 sis=41) [2] r=-1 lpr=41 pi=[39,41)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.172649 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 361585 data_alloc: 218103808 data_used: 0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 1982464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 43 handle_osd_map epochs [43,44], i have 43, src has [1,44]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.e( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.923857 2 0.000167
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.6( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.924093 2 0.000079
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.6( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.925555 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.6( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.a( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.924129 2 0.000046
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.a( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.924965 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.a( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.6( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.e( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.926398 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.e( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.e( v 31'39 lc 31'19 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.2( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.924929 2 0.000078
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.2( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.926561 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.2( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 44 handle_osd_map epochs [44,44], i have 44, src has [1,44]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.2( v 31'39 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.6( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002694 4 0.000117
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.6( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002750 4 0.000156
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.6( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.6( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000333 1 0.000195
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.6( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.6( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000008 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.6( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.e( v 31'39 lc 31'19 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.2( v 31'39 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.e( v 31'39 lc 31'19 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.005096 5 0.000486
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.e( v 31'39 lc 31'19 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.2( v 31'39 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004689 4 0.000126
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.2( v 31'39 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.2( v 31'39 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.2( v 31'39 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.6( v 31'39 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.008079 2 0.000090
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.6( v 31'39 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.6( v 31'39 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.6( v 31'39 (0'0,31'39] local-lis/les=43/44 n=2 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.e( v 31'39 lc 31'19 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.006019 1 0.000072
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.e( v 31'39 lc 31'19 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.e( v 31'39 lc 31'19 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000011 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.e( v 31'39 lc 31'19 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.e( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.066848 1 0.000091
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.e( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.e( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000018 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 44 pg[6.e( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/37 les/c/f=44/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 1875968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 1916928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 44 heartbeat osd_stat(store_statfs(0x4fe0dc000/0x0/0x4ffc00000, data 0x9f725/0xea000, compress 0x0/0x0/0x0, omap 0x7aef, meta 0x1a28511), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 1900544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 44 heartbeat osd_stat(store_statfs(0x4fe0e2000/0x0/0x4ffc00000, data 0x9f725/0xea000, compress 0x0/0x0/0x0, omap 0x7aef, meta 0x1a28511), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 44 heartbeat osd_stat(store_statfs(0x4fe0e2000/0x0/0x4ffc00000, data 0x9f725/0xea000, compress 0x0/0x0/0x0, omap 0x7aef, meta 0x1a28511), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 1900544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.783351898s of 10.039956093s, submitted: 421
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 44 heartbeat osd_stat(store_statfs(0x4fe0e2000/0x0/0x4ffc00000, data 0x9f725/0xea000, compress 0x0/0x0/0x0, omap 0x7aef, meta 0x1a28511), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 372816 data_alloc: 218103808 data_used: 0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 1900544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 1900544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 1900544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 44 handle_osd_map epochs [44,45], i have 44, src has [1,45]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active+clean] exit Started/Primary/Active/Clean 8.811632 8 0.000173
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active 9.110325 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary 10.120440 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started 10.120484 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.897034645s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 active pruub 108.124214172s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active+clean] exit Started/Primary/Active/Clean 8.538527 8 0.000156
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active 9.108539 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.896965027s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.124214172s@ mbc={}] exit Reset 0.000127 1 0.000214
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary 10.120953 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.896965027s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.124214172s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.896965027s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.124214172s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.896965027s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.124214172s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started 10.120984 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.896965027s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.124214172s@ mbc={}] exit Start 0.000021 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.896965027s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.124214172s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.900068283s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 active pruub 108.127418518s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.900019646s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.127418518s@ mbc={}] exit Reset 0.000094 1 0.000194
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.900019646s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.127418518s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.900019646s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.127418518s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.900019646s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.127418518s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.900019646s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.127418518s@ mbc={}] exit Start 0.000019 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active+clean] exit Started/Primary/Active/Clean 8.247598 8 0.000177
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.900019646s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.127418518s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active 9.110921 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary 10.122479 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started 10.122501 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.900277138s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 active pruub 108.127799988s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.900236130s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.127799988s@ mbc={}] exit Reset 0.000072 1 0.000107
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.900236130s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.127799988s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.900236130s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.127799988s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.900236130s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.127799988s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.900236130s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.127799988s@ mbc={}] exit Start 0.000011 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.900236130s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.127799988s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active+clean] exit Started/Primary/Active/Clean 8.598043 8 0.000157
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active 9.110061 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary 10.123687 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started 10.123725 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.896610260s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 active pruub 108.124343872s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.896554947s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.124343872s@ mbc={}] exit Reset 0.000096 1 0.000148
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.896554947s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.124343872s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.896554947s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.124343872s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.896554947s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.124343872s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.896554947s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.124343872s@ mbc={}] exit Start 0.000020 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 45 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=14.896554947s) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY pruub 108.124343872s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 45 handle_osd_map epochs [45,45], i have 45, src has [1,45]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 1892352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 45 handle_osd_map epochs [45,46], i have 45, src has [1,46]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY mbc={}] exit Started/Stray 1.024282 7 0.000122
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY mbc={}] exit Started/Stray 1.024614 7 0.000064
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY mbc={}] exit Started/Stray 1.024911 7 0.000110
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY mbc={}] exit Started/Stray 1.024826 7 0.000104
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 crt=31'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 46 handle_osd_map epochs [46,46], i have 46, src has [1,46]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.011369 2 0.000086
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ReplicaActive 0.011413 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000143 1 0.000080
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.b( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ToDelete/Deleting 0.123994 2 0.000241
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.b( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ToDelete 0.124251 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.b( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started 1.160684 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 1884160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.205433 2 0.000047
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ReplicaActive 0.205474 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000132 1 0.000092
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.f( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.264005 2 0.000029
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ReplicaActive 0.264038 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000096 1 0.000063
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.7( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.f( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ToDelete/Deleting 0.137502 2 0.000212
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.f( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ToDelete 0.137695 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.f( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started 1.367532 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.7( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ToDelete/Deleting 0.153458 2 0.000223
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.7( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ToDelete 0.153638 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.7( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started 1.442571 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.419752 2 0.000089
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ReplicaActive 0.419790 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000099 1 0.000102
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.3( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.3( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 DELETING pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ToDelete/Deleting 0.019383 2 0.000133
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.3( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ToDelete 0.019524 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 46 pg[6.3( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=-1 lpr=45 pi=[41,45)/1 pct=0'0 crt=31'39 active mbc={}] exit Started 1.464016 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 46 heartbeat osd_stat(store_statfs(0x4fe0dd000/0x0/0x4ffc00000, data 0xa0d3b/0xed000, compress 0x0/0x0/0x0, omap 0x7d8d, meta 0x1a28273), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 46 handle_osd_map epochs [46,47], i have 46, src has [1,47]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 368741 data_alloc: 218103808 data_used: 0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 1835008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 1769472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 47 handle_osd_map epochs [47,48], i have 47, src has [1,48]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active+clean] exit Started/Primary/Active/Clean 12.410961 17 0.000433
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active 13.157714 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary 14.171227 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started 14.171329 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active+clean] exit Started/Primary/Active/Clean 13.084869 17 0.000147
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active 13.161128 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary 14.172559 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started 14.172595 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=10.850582123s) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 active pruub 108.127700806s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=10.846266747s) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 active pruub 108.123405457s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=10.850494385s) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY pruub 108.127700806s@ mbc={}] exit Reset 0.000157 1 0.000365
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=10.850494385s) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY pruub 108.127700806s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=10.850494385s) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY pruub 108.127700806s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=10.846174240s) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY pruub 108.123405457s@ mbc={}] exit Reset 0.000136 1 0.000189
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=10.850494385s) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY pruub 108.127700806s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=10.846174240s) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY pruub 108.123405457s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=10.850494385s) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY pruub 108.127700806s@ mbc={}] exit Start 0.000018 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=10.846174240s) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY pruub 108.123405457s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=10.846174240s) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY pruub 108.123405457s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=10.850494385s) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY pruub 108.127700806s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=10.846174240s) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY pruub 108.123405457s@ mbc={}] exit Start 0.000015 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48 pruub=10.846174240s) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY pruub 108.123405457s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 48 handle_osd_map epochs [47,48], i have 48, src has [1,48]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.c(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=0 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000104 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=0 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000039
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000142 1 0.000061
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.4(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=0 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000078 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=0 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000030
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000108 1 0.000051
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.c( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001060 2 0.000061
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.c( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.c( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.c( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.4( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetLog 0.000945 2 0.000061
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.4( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.4( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 48 pg[6.4( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 1769472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 48 handle_osd_map epochs [48,49], i have 48, src has [1,49]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 48 handle_osd_map epochs [49,49], i have 49, src has [1,49]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.c( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007568 2 0.000073
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.c( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.008849 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.c( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY mbc={}] exit Started/Stray 1.017124 6 0.000095
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.c( v 31'39 lc 31'17 (0'0,31'39] local-lis/les=47/49 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY mbc={}] exit Started/Stray 1.017222 6 0.000126
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 crt=31'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.4( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007055 2 0.000107
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.4( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering 1.008180 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.4( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=37/38 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 0'0 unknown m=4 mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.4( v 31'39 lc 31'15 (0'0,31'39] local-lis/les=47/49 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 activating+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.4( v 31'39 lc 31'15 (0'0,31'39] local-lis/les=47/49 n=2 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.c( v 31'39 lc 31'17 (0'0,31'39] local-lis/les=47/49 n=1 ec=37/21 lis/c=37/37 les/c/f=38/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.c( v 31'39 lc 31'17 (0'0,31'39] local-lis/les=47/49 n=1 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.005150 4 0.000241
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.c( v 31'39 lc 31'17 (0'0,31'39] local-lis/les=47/49 n=1 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.4( v 31'39 lc 31'15 (0'0,31'39] local-lis/les=47/49 n=2 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.005203 4 0.000919
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.4( v 31'39 lc 31'15 (0'0,31'39] local-lis/les=47/49 n=2 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.c( v 31'39 lc 31'17 (0'0,31'39] local-lis/les=47/49 n=1 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000653 1 0.000329
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.c( v 31'39 lc 31'17 (0'0,31'39] local-lis/les=47/49 n=1 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.c( v 31'39 lc 31'17 (0'0,31'39] local-lis/les=47/49 n=1 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000011 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.c( v 31'39 lc 31'17 (0'0,31'39] local-lis/les=47/49 n=1 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 49 heartbeat osd_stat(store_statfs(0x4fe0d7000/0x0/0x4ffc00000, data 0xa4b5f/0xf3000, compress 0x0/0x0/0x0, omap 0x87b7, meta 0x1a27849), peers [0,2] op hist [0,0,0,0,0,1])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.c( v 31'39 (0'0,31'39] local-lis/les=47/49 n=1 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.070162 2 0.000078
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.c( v 31'39 (0'0,31'39] local-lis/les=47/49 n=1 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.c( v 31'39 (0'0,31'39] local-lis/les=47/49 n=1 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000023 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.c( v 31'39 (0'0,31'39] local-lis/les=47/49 n=1 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.4( v 31'39 lc 31'15 (0'0,31'39] local-lis/les=47/49 n=2 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.070683 2 0.000130
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.4( v 31'39 lc 31'15 (0'0,31'39] local-lis/les=47/49 n=2 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.4( v 31'39 lc 31'15 (0'0,31'39] local-lis/les=47/49 n=2 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000021 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.4( v 31'39 lc 31'15 (0'0,31'39] local-lis/les=47/49 n=2 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.081930 3 0.000083
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ReplicaActive 0.082007 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 pct=0'0 crt=31'39 active mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 pct=0'0 crt=31'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.222372 3 0.000114
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ReplicaActive 0.222425 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 pct=0'0 crt=31'39 active mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 pct=0'0 crt=31'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.4( v 31'39 (0'0,31'39] local-lis/les=47/49 n=2 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.277125 1 0.000132
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.4( v 31'39 (0'0,31'39] local-lis/les=47/49 n=2 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.4( v 31'39 (0'0,31'39] local-lis/les=47/49 n=2 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.4( v 31'39 (0'0,31'39] local-lis/les=47/49 n=2 ec=37/21 lis/c=47/37 les/c/f=49/38/0 sis=47) [1] r=0 lpr=48 pi=[37,47)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.271493 1 0.000155
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 pct=0'0 crt=31'39 active mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.131538 1 0.000127
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.5( v 31'39 (0'0,31'39] local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 pct=0'0 crt=31'39 active mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.5( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 DELETING pi=[41,48)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ToDelete/Deleting 0.029365 2 0.000273
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.5( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ToDelete 0.161008 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.5( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=2 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 pct=0'0 crt=31'39 active mbc={}] exit Started 1.400818 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.d( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 DELETING pi=[41,48)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ToDelete/Deleting 0.044731 2 0.000684
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.d( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 pct=0'0 crt=31'39 active mbc={}] exit Started/ToDelete 0.316364 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 49 pg[6.d( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=48) [0] r=-1 lpr=48 pi=[41,48)/1 pct=0'0 crt=31'39 active mbc={}] exit Started 1.415595 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 49 heartbeat osd_stat(store_statfs(0x4fe0d7000/0x0/0x4ffc00000, data 0xa4b5f/0xf3000, compress 0x0/0x0/0x0, omap 0x87b7, meta 0x1a27849), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 379561 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.780271530s of 10.901998520s, submitted: 64
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 581632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 49 handle_osd_map epochs [50,51], i have 49, src has [1,51]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 51 heartbeat osd_stat(store_statfs(0x4fe0d5000/0x0/0x4ffc00000, data 0xa659d/0xf7000, compress 0x0/0x0/0x0, omap 0x8ce3, meta 0x1a2731d), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 51 handle_osd_map epochs [52,52], i have 52, src has [1,52]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 391474 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65527808 unmapped: 524288 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65552384 unmapped: 499712 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 52 heartbeat osd_stat(store_statfs(0x4fe0c8000/0x0/0x4ffc00000, data 0xaa7df/0x100000, compress 0x0/0x0/0x0, omap 0x922d, meta 0x1a26dd3), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 52 handle_osd_map epochs [53,53], i have 53, src has [1,53]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 53 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 22.910079 31 0.000113
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 53 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 22.916168 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 53 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 23.927995 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 53 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 23.928020 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 53 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=41) [1] r=0 lpr=41 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 53 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.090121269s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=31'39 lcod 0'0 active pruub 116.124732971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 53 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.089967728s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 116.124732971s@ mbc={}] exit Reset 0.000245 1 0.000320
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 53 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.089967728s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 116.124732971s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 53 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.089967728s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 116.124732971s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 53 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.089967728s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 116.124732971s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 53 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.089967728s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 116.124732971s@ mbc={}] exit Start 0.000138 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 53 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.089967728s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 116.124732971s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=31'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 21.257866 28 0.000187
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 21.260673 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 22.185658 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 22.185691 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=43) [1] r=0 lpr=43 crt=31'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.742419243s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=31'39 lcod 0'0 active pruub 118.168533325s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.742380142s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 118.168533325s@ mbc={}] exit Reset 0.000070 1 0.000120
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.742380142s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 118.168533325s@ mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.742380142s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 118.168533325s@ mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.742380142s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 118.168533325s@ mbc={}] state<Start>: transitioning to Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.742380142s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 118.168533325s@ mbc={}] exit Start 0.000009 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.742380142s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=31'39 lcod 0'0 unknown NOTIFY pruub 118.168533325s@ mbc={}] enter Started/Stray
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 54 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=31'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.398361 7 0.000300
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=31'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=31'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=31'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000064 1 0.000092
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.9( v 31'39 (0'0,31'39] local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=31'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.9( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=31'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.002234 1 0.000044
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.9( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=31'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.002369 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 54 pg[6.9( v 31'39 (0'0,31'39] lb MIN local-lis/les=41/42 n=1 ec=37/21 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=31'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.400984 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65634304 unmapped: 417792 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 327680 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 54 handle_osd_map epochs [55,55], i have 54, src has [1,55]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 55 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=-1 lpr=54 pi=[43,54)/1 crt=31'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.442408 6 0.000095
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 55 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=-1 lpr=54 pi=[43,54)/1 crt=31'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 55 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=-1 lpr=54 pi=[43,54)/1 crt=31'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 55 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=-1 lpr=54 pi=[43,54)/1 crt=31'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000096 1 0.000046
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 55 pg[6.a( v 31'39 (0'0,31'39] local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=-1 lpr=54 pi=[43,54)/1 crt=31'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 55 pg[6.a( v 31'39 (0'0,31'39] lb MIN local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[43,54)/1 crt=31'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.002435 2 0.000033
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 55 pg[6.a( v 31'39 (0'0,31'39] lb MIN local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=-1 lpr=54 pi=[43,54)/1 crt=31'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.002569 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 55 pg[6.a( v 31'39 (0'0,31'39] lb MIN local-lis/les=43/44 n=1 ec=37/21 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=-1 lpr=54 pi=[43,54)/1 crt=31'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.445023 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 270336 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 56 pg[6.b(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=0 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000108 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=0 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000040
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000177 1 0.000054
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 56 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001527 2 0.000049
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 56 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 56 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000041 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 56 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 405546 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 57 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.523468 2 0.000303
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 57 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.525418 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 57 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=45/46 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 57 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=56/57 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 57 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=56/57 n=1 ec=37/21 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 57 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=56/57 n=1 ec=37/21 lis/c=56/45 les/c/f=57/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.003178 3 0.000738
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 57 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=56/57 n=1 ec=37/21 lis/c=56/45 les/c/f=57/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 57 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=56/57 n=1 ec=37/21 lis/c=56/45 les/c/f=57/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000193 1 0.000095
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 57 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=56/57 n=1 ec=37/21 lis/c=56/45 les/c/f=57/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 57 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=56/57 n=1 ec=37/21 lis/c=56/45 les/c/f=57/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000054 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 57 pg[6.b( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=56/57 n=1 ec=37/21 lis/c=56/45 les/c/f=57/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 57 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=56/57 n=1 ec=37/21 lis/c=56/45 les/c/f=57/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.008014 3 0.000251
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 57 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=56/57 n=1 ec=37/21 lis/c=56/45 les/c/f=57/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 57 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=56/57 n=1 ec=37/21 lis/c=56/45 les/c/f=57/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000045 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 57 pg[6.b( v 31'39 (0'0,31'39] local-lis/les=56/57 n=1 ec=37/21 lis/c=56/45 les/c/f=57/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 245760 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 212992 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 57 heartbeat osd_stat(store_statfs(0x4fe0b7000/0x0/0x4ffc00000, data 0xb1331/0x10f000, compress 0x0/0x0/0x0, omap 0x9f2d, meta 0x1a260d3), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.751835823s of 10.814554214s, submitted: 28
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 58 heartbeat osd_stat(store_statfs(0x4fe0b2000/0x0/0x4ffc00000, data 0xb2947/0x112000, compress 0x0/0x0/0x0, omap 0xa1a0, meta 0x1a25e60), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 393216 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 58 heartbeat osd_stat(store_statfs(0x4fe0b8000/0x0/0x4ffc00000, data 0xb2947/0x112000, compress 0x0/0x0/0x0, omap 0xa1a0, meta 0x1a25e60), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65658880 unmapped: 393216 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65667072 unmapped: 385024 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 59 pg[6.d(unlocked)] enter Initial
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=0 pi=[48,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000200 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=0 pi=[48,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000041 1 0.000090
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000343 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000334 1 0.000568
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 59 pg[6.d( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=48/49 n=1 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.001448 2 0.000134
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 59 pg[6.d( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=48/49 n=1 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 59 pg[6.d( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=48/49 n=1 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000023 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 59 pg[6.d( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=48/49 n=1 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 423563 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 1409024 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 59 handle_osd_map epochs [59,60], i have 59, src has [1,60]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 59 handle_osd_map epochs [59,60], i have 60, src has [1,60]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 60 pg[6.d( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=48/49 n=1 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.903401 2 0.000196
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 60 pg[6.d( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=48/49 n=1 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.905379 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 60 pg[6.d( v 31'39 lc 0'0 (0'0,31'39] local-lis/les=48/49 n=1 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 60 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=59/60 n=1 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 60 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=59/60 n=1 ec=37/21 lis/c=48/48 les/c/f=49/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 60 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=59/60 n=1 ec=37/21 lis/c=59/48 les/c/f=60/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002931 3 0.000214
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 60 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=59/60 n=1 ec=37/21 lis/c=59/48 les/c/f=60/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 60 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=59/60 n=1 ec=37/21 lis/c=59/48 les/c/f=60/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000102 1 0.000052
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 60 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=59/60 n=1 ec=37/21 lis/c=59/48 les/c/f=60/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 60 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=59/60 n=1 ec=37/21 lis/c=59/48 les/c/f=60/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000036 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 60 pg[6.d( v 31'39 lc 31'13 (0'0,31'39] local-lis/les=59/60 n=1 ec=37/21 lis/c=59/48 les/c/f=60/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 60 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=59/60 n=1 ec=37/21 lis/c=59/48 les/c/f=60/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.068030 3 0.000195
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 60 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=59/60 n=1 ec=37/21 lis/c=59/48 les/c/f=60/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 60 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=59/60 n=1 ec=37/21 lis/c=59/48 les/c/f=60/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000023 0 0.000000
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 pg_epoch: 60 pg[6.d( v 31'39 (0'0,31'39] local-lis/les=59/60 n=1 ec=37/21 lis/c=59/48 les/c/f=60/49/0 sis=59) [1] r=0 lpr=59 pi=[48,59)/1 crt=31'39 mlcod 31'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 60 heartbeat osd_stat(store_statfs(0x4fe0b5000/0x0/0x4ffc00000, data 0xb3f5d/0x115000, compress 0x0/0x0/0x0, omap 0xa417, meta 0x1a25be9), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 1384448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65716224 unmapped: 1384448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 1376256 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 61 heartbeat osd_stat(store_statfs(0x4fe0ae000/0x0/0x4ffc00000, data 0xb6bd7/0x11c000, compress 0x0/0x0/0x0, omap 0xaaa5, meta 0x1a2555b), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65732608 unmapped: 1368064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 438758 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 1351680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 1351680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 1351680 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 1343488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ab000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ab000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65757184 unmapped: 1343488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ab000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 442250 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ab000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65765376 unmapped: 1335296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ab000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ab000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1318912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65781760 unmapped: 1318912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.895381927s of 17.993841171s, submitted: 24
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ab000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 443941 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65798144 unmapped: 1302528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 1294336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65806336 unmapped: 1294336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1286144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65814528 unmapped: 1286144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 446352 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65822720 unmapped: 1277952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 1269760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 1269760 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1261568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65839104 unmapped: 1261568 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.989287376s of 10.003664970s, submitted: 5
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 448763 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 1236992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 1236992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1212416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1212416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65888256 unmapped: 1212416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 451174 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 1204224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 1204224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65896448 unmapped: 1204224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 1196032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 1196032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 451174 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 1187840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.963445663s of 10.970324516s, submitted: 3
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65904640 unmapped: 1196032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65912832 unmapped: 1187840 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1155072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65945600 unmapped: 1155072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 455998 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 1146880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 1146880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 1146880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 1097728 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 460822 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1089536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1089536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1089536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.175627708s of 12.195529938s, submitted: 8
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 1081344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66019328 unmapped: 1081344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 463233 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 1073152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66027520 unmapped: 1073152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1064960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1064960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66035712 unmapped: 1064960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 468059 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1056768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66043904 unmapped: 1056768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 1048576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 1048576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 1040384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 468059 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 1032192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.061655045s of 14.075113297s, submitted: 6
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 1015808 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 1015808 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 1007616 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 472885 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 477709 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 974848 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 974848 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 966656 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 966656 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.014001846s of 12.029939651s, submitted: 8
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 480122 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 484946 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 901120 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 892928 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 884736 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 487359 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 884736 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.013473511s of 12.029477119s, submitted: 8
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 884736 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 876544 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 868352 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 860160 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 492183 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 851968 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 843776 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 811008 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 497007 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 811008 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 811008 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 802816 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.861497879s of 11.878818512s, submitted: 8
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 802816 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 794624 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501829 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 794624 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 794624 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 786432 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 786432 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 770048 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501829 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 761856 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.959852219s of 11.973832130s, submitted: 6
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 506651 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 696320 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 506651 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 688128 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 671744 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 663552 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 506651 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 663552 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.087844849s of 12.090794563s, submitted: 2
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 647168 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 647168 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 638976 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 638976 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 509062 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 638976 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 630784 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 622592 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 598016 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 589824 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 518706 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.991565704s of 11.008629799s, submitted: 10
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 565248 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 523528 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 548864 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 532480 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 532480 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 524288 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 524288 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528352 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 524288 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 491520 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 491520 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.000996590s of 11.020680428s, submitted: 10
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 483328 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 483328 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 535589 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 483328 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 458752 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 458752 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 540413 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 425984 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.957039833s of 10.982520103s, submitted: 12
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 550057 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 554879 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 434176 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 434176 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 425984 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 425984 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 425984 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 417792 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 417792 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 385024 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 385024 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 385024 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 376832 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 376832 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 376832 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 360448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 360448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 352256 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 311296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 311296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 172032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 16384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 16384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 983040 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 983040 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 983040 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 974848 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 974848 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 901120 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 901120 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 901120 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 892928 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 892928 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 876544 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 876544 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 876544 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 868352 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 868352 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 860160 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 860160 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 835584 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 835584 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 811008 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 811008 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 802816 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 802816 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 802816 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 720896 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 720896 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 704512 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 704512 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 688128 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 688128 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 688128 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 630784 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 630784 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 630784 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 573440 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 573440 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 557056 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 557056 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 524288 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 524288 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 491520 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 491520 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 483328 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 483328 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 466944 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 466944 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 434176 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 434176 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 434176 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 409600 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 409600 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 409600 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 401408 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 401408 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 376832 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 376832 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 368640 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 4515 writes, 20K keys, 4515 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 4515 writes, 505 syncs, 8.94 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4515 writes, 20K keys, 4515 commit groups, 1.0 writes per commit group, ingest: 16.57 MB, 0.03 MB/s#012Interval WAL: 4515 writes, 505 syncs, 8.94 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cf1bed58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cf1bed58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 286720 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 278528 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 270336 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 270336 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 270336 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 262144 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 262144 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 245760 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 245760 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 237568 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 237568 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 237568 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 221184 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 221184 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 221184 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 188416 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 188416 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 163840 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 163840 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 163840 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 147456 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 147456 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 106496 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 106496 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 106496 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 8192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 8192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 1032192 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 1032192 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 983040 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 983040 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:34:42 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:37:56 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v988: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:37:56 np0005544708 rsyslogd[1006]: imjournal: 15395 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  3 16:37:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:37:58 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v989: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:37:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  3 16:37:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3938407216' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  3 16:37:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  3 16:37:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3938407216' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  3 16:38:00 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v990: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:38:00.484139) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797880484169, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1268, "num_deletes": 251, "total_data_size": 1308922, "memory_usage": 1334464, "flush_reason": "Manual Compaction"}
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797880495288, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1280505, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19351, "largest_seqno": 20618, "table_properties": {"data_size": 1274542, "index_size": 3294, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12605, "raw_average_key_size": 19, "raw_value_size": 1262502, "raw_average_value_size": 1975, "num_data_blocks": 151, "num_entries": 639, "num_filter_entries": 639, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764797751, "oldest_key_time": 1764797751, "file_creation_time": 1764797880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 12049 microseconds, and 5610 cpu microseconds.
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:38:00.496153) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1280505 bytes OK
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:38:00.496233) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:38:00.498195) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:38:00.498237) EVENT_LOG_v1 {"time_micros": 1764797880498225, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:38:00.498265) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1303203, prev total WAL file size 1303203, number of live WAL files 2.
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:38:00.499306) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1250KB)], [47(5625KB)]
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797880499369, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 7041482, "oldest_snapshot_seqno": -1}
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4038 keys, 5844169 bytes, temperature: kUnknown
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797880543749, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 5844169, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5814906, "index_size": 18081, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 97528, "raw_average_key_size": 24, "raw_value_size": 5740070, "raw_average_value_size": 1421, "num_data_blocks": 768, "num_entries": 4038, "num_filter_entries": 4038, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796079, "oldest_key_time": 0, "file_creation_time": 1764797880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:38:00.544203) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 5844169 bytes
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:38:00.545896) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.0 rd, 131.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 5.5 +0.0 blob) out(5.6 +0.0 blob), read-write-amplify(10.1) write-amplify(4.6) OK, records in: 4552, records dropped: 514 output_compression: NoCompression
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:38:00.545926) EVENT_LOG_v1 {"time_micros": 1764797880545910, "job": 24, "event": "compaction_finished", "compaction_time_micros": 44563, "compaction_time_cpu_micros": 26857, "output_level": 6, "num_output_files": 1, "total_output_size": 5844169, "num_input_records": 4552, "num_output_records": 4038, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797880546465, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764797880548198, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:38:00.499168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:38:00.548298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:38:00.548305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:38:00.548309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:38:00.548314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:38:00 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:38:00.548320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:38:01 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:38:01 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.0 total, 600.0 interval
Cumulative writes: 4617 writes, 20K keys, 4617 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 4617 writes, 4617 syncs, 1.00 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1522 writes, 7118 keys, 1522 commit groups, 1.0 writes per commit group, ingest: 6.55 MB, 0.01 MB/s
Interval WAL: 1522 writes, 1522 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     97.1      0.18              0.07        12    0.015       0      0       0.0       0.0
  L6      1/0    5.57 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    128.8    106.4      0.53              0.21        11    0.049     42K   5814       0.0       0.0
 Sum      1/0    5.57 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     96.3    104.1      0.72              0.29        23    0.031     42K   5814       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.4     94.7     97.0      0.41              0.16        12    0.034     26K   3544       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    128.8    106.4      0.53              0.21        11    0.049     42K   5814       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     99.8      0.18              0.07        11    0.016       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.01              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1800.0 total, 600.0 interval
Flush(GB): cumulative 0.017, interval 0.007
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.07 GB write, 0.04 MB/s write, 0.07 GB read, 0.04 MB/s read, 0.7 seconds
Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.4 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56170d6e38d0#2 capacity: 308.00 MB usage: 6.89 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(689,6.54 MB,2.12229%) FilterBlock(24,124.17 KB,0.0393706%) IndexBlock(24,241.20 KB,0.0764772%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec  3 16:38:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:38:02 np0005544708 podman[257517]: 2025-12-03 21:38:02.170659289 +0000 UTC m=+0.108987297 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  3 16:38:02 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v991: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:04 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v992: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:06 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v993: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:38:08 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v994: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:10 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v995: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:38:12 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v996: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:14 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v997: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:16 np0005544708 podman[257545]: 2025-12-03 21:38:16.127972672 +0000 UTC m=+0.066470646 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  3 16:38:16 np0005544708 podman[257544]: 2025-12-03 21:38:16.158762099 +0000 UTC m=+0.091937200 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2)
Dec  3 16:38:16 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v998: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:38:18 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v999: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:20 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1000: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:38:21
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'backups', 'images', 'volumes']
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:38:21 np0005544708 nova_compute[241566]: 2025-12-03 21:38:21.565 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:38:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:38:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:38:22 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1001: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:24 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1002: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:24 np0005544708 nova_compute[241566]: 2025-12-03 21:38:24.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:38:24 np0005544708 nova_compute[241566]: 2025-12-03 21:38:24.552 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 16:38:24 np0005544708 nova_compute[241566]: 2025-12-03 21:38:24.552 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 16:38:24 np0005544708 nova_compute[241566]: 2025-12-03 21:38:24.627 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 16:38:24 np0005544708 nova_compute[241566]: 2025-12-03 21:38:24.628 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:38:24 np0005544708 nova_compute[241566]: 2025-12-03 21:38:24.628 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:38:25 np0005544708 nova_compute[241566]: 2025-12-03 21:38:25.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:38:25 np0005544708 nova_compute[241566]: 2025-12-03 21:38:25.844 241570 DEBUG oslo_concurrency.processutils [None req-cf8f4e5e-8023-4cda-a194-a324473da3c2 1278db95002f4a508698b0c865809410 e82fda53634b410e910c801bc1b00db2 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:38:25 np0005544708 nova_compute[241566]: 2025-12-03 21:38:25.886 241570 DEBUG oslo_concurrency.processutils [None req-cf8f4e5e-8023-4cda-a194-a324473da3c2 1278db95002f4a508698b0c865809410 e82fda53634b410e910c801bc1b00db2 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:38:26 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1003: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:26 np0005544708 nova_compute[241566]: 2025-12-03 21:38:26.552 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:38:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:38:27 np0005544708 nova_compute[241566]: 2025-12-03 21:38:27.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:38:27 np0005544708 nova_compute[241566]: 2025-12-03 21:38:27.552 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 16:38:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:38:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:38:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:38:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:38:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.533351853729544e-07 of space, bias 1.0, pg target 0.00010600055561188632 quantized to 32 (current 32)
Dec  3 16:38:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:38:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.833582600959152e-08 of space, bias 1.0, pg target 2.9500747802877454e-05 quantized to 32 (current 32)
Dec  3 16:38:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:38:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:38:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:38:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006678894471709225 of space, bias 1.0, pg target 0.20036683415127676 quantized to 32 (current 32)
Dec  3 16:38:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:38:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.953496000112683e-07 of space, bias 4.0, pg target 0.0009544195200135219 quantized to 16 (current 16)
Dec  3 16:38:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:38:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:38:28 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1004: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:28 np0005544708 nova_compute[241566]: 2025-12-03 21:38:28.552 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:38:29 np0005544708 nova_compute[241566]: 2025-12-03 21:38:29.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:38:29 np0005544708 nova_compute[241566]: 2025-12-03 21:38:29.596 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:38:29 np0005544708 nova_compute[241566]: 2025-12-03 21:38:29.597 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:38:29 np0005544708 nova_compute[241566]: 2025-12-03 21:38:29.597 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:38:29 np0005544708 nova_compute[241566]: 2025-12-03 21:38:29.597 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 16:38:29 np0005544708 nova_compute[241566]: 2025-12-03 21:38:29.598 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:38:30 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:38:30 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2384620290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:38:30 np0005544708 nova_compute[241566]: 2025-12-03 21:38:30.159 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:38:30 np0005544708 nova_compute[241566]: 2025-12-03 21:38:30.382 241570 WARNING nova.virt.libvirt.driver [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 16:38:30 np0005544708 nova_compute[241566]: 2025-12-03 21:38:30.384 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5155MB free_disk=59.988260054029524GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 16:38:30 np0005544708 nova_compute[241566]: 2025-12-03 21:38:30.385 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:38:30 np0005544708 nova_compute[241566]: 2025-12-03 21:38:30.385 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:38:30 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1005: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:30 np0005544708 nova_compute[241566]: 2025-12-03 21:38:30.478 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 16:38:30 np0005544708 nova_compute[241566]: 2025-12-03 21:38:30.478 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 16:38:30 np0005544708 nova_compute[241566]: 2025-12-03 21:38:30.493 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:38:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:38:31 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/199793592' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:38:31 np0005544708 nova_compute[241566]: 2025-12-03 21:38:31.081 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:38:31 np0005544708 nova_compute[241566]: 2025-12-03 21:38:31.087 241570 DEBUG nova.compute.provider_tree [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed in ProviderTree for provider: 94aba67c-5c5e-45d0-83d1-33eb467c8775 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 16:38:31 np0005544708 nova_compute[241566]: 2025-12-03 21:38:31.105 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 16:38:31 np0005544708 nova_compute[241566]: 2025-12-03 21:38:31.107 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 16:38:31 np0005544708 nova_compute[241566]: 2025-12-03 21:38:31.107 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:38:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:38:32 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1006: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:33 np0005544708 podman[257628]: 2025-12-03 21:38:33.161327863 +0000 UTC m=+0.100155010 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  3 16:38:34 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:38:34.186 151937 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:b3:fa', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:85:3a:67:f5:74'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  3 16:38:34 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:38:34.188 151937 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  3 16:38:34 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1007: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:36 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1008: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:38:38 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1009: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:40 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1010: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:38:42 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:38:42.190 151937 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f27c01e7-5b62-4209-a664-3ae50b74644d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  3 16:38:42 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1011: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:44 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1012: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:46 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1013: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:38:47 np0005544708 podman[257655]: 2025-12-03 21:38:47.13452441 +0000 UTC m=+0.074587393 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 16:38:47 np0005544708 podman[257654]: 2025-12-03 21:38:47.143348108 +0000 UTC m=+0.082427885 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:38:48 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1014: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:38:48.943 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:38:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:38:48.944 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:38:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:38:48.944 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:38:50 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1015: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:38:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:38:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:38:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:38:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:38:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:38:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:38:52 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1016: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:54 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1017: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:38:55 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:38:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:38:55 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:38:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:38:55 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:38:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:38:55 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:38:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:38:55 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:38:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:38:55 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:38:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:38:55 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:38:55 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:38:55 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:38:55 np0005544708 podman[257907]: 2025-12-03 21:38:55.929153419 +0000 UTC m=+0.059915350 container create 0bef449bc320516987948b257f863cc6d5d2d4c05cb0d4b67f3be0aed8ef4a20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hertz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:38:55 np0005544708 systemd[1]: Started libpod-conmon-0bef449bc320516987948b257f863cc6d5d2d4c05cb0d4b67f3be0aed8ef4a20.scope.
Dec  3 16:38:56 np0005544708 podman[257907]: 2025-12-03 21:38:55.90833418 +0000 UTC m=+0.039096141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:38:56 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:38:56 np0005544708 podman[257907]: 2025-12-03 21:38:56.044205538 +0000 UTC m=+0.174967529 container init 0bef449bc320516987948b257f863cc6d5d2d4c05cb0d4b67f3be0aed8ef4a20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hertz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:38:56 np0005544708 podman[257907]: 2025-12-03 21:38:56.061222995 +0000 UTC m=+0.191984936 container start 0bef449bc320516987948b257f863cc6d5d2d4c05cb0d4b67f3be0aed8ef4a20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec  3 16:38:56 np0005544708 podman[257907]: 2025-12-03 21:38:56.065701286 +0000 UTC m=+0.196463307 container attach 0bef449bc320516987948b257f863cc6d5d2d4c05cb0d4b67f3be0aed8ef4a20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:38:56 np0005544708 reverent_hertz[257923]: 167 167
Dec  3 16:38:56 np0005544708 systemd[1]: libpod-0bef449bc320516987948b257f863cc6d5d2d4c05cb0d4b67f3be0aed8ef4a20.scope: Deactivated successfully.
Dec  3 16:38:56 np0005544708 podman[257907]: 2025-12-03 21:38:56.076420993 +0000 UTC m=+0.207182944 container died 0bef449bc320516987948b257f863cc6d5d2d4c05cb0d4b67f3be0aed8ef4a20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hertz, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  3 16:38:56 np0005544708 systemd[1]: var-lib-containers-storage-overlay-ed72e68c72f1aa9b7e9316c26a005d221199c1168a428ada2e2fe36fd3269950-merged.mount: Deactivated successfully.
Dec  3 16:38:56 np0005544708 podman[257907]: 2025-12-03 21:38:56.126851217 +0000 UTC m=+0.257613128 container remove 0bef449bc320516987948b257f863cc6d5d2d4c05cb0d4b67f3be0aed8ef4a20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:38:56 np0005544708 systemd[1]: libpod-conmon-0bef449bc320516987948b257f863cc6d5d2d4c05cb0d4b67f3be0aed8ef4a20.scope: Deactivated successfully.
Dec  3 16:38:56 np0005544708 podman[257945]: 2025-12-03 21:38:56.369910153 +0000 UTC m=+0.069273661 container create 58dddf7247e7c4a6e294d4dc19233c8bc1cfc8ce8e0ac1cf4f6d1e460ca9193d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_chandrasekhar, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 16:38:56 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:38:56 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:38:56 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:38:56 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:38:56 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:38:56 np0005544708 systemd[1]: Started libpod-conmon-58dddf7247e7c4a6e294d4dc19233c8bc1cfc8ce8e0ac1cf4f6d1e460ca9193d.scope.
Dec  3 16:38:56 np0005544708 podman[257945]: 2025-12-03 21:38:56.340874364 +0000 UTC m=+0.040237912 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:38:56 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:38:56 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbcfd02a2f384bb37632d0a7b4ade61aa1e0e99447a0146eecd81d330e1c6aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:38:56 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbcfd02a2f384bb37632d0a7b4ade61aa1e0e99447a0146eecd81d330e1c6aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:38:56 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbcfd02a2f384bb37632d0a7b4ade61aa1e0e99447a0146eecd81d330e1c6aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:38:56 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbcfd02a2f384bb37632d0a7b4ade61aa1e0e99447a0146eecd81d330e1c6aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:38:56 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbcfd02a2f384bb37632d0a7b4ade61aa1e0e99447a0146eecd81d330e1c6aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:38:56 np0005544708 podman[257945]: 2025-12-03 21:38:56.477453551 +0000 UTC m=+0.176817099 container init 58dddf7247e7c4a6e294d4dc19233c8bc1cfc8ce8e0ac1cf4f6d1e460ca9193d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_chandrasekhar, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 16:38:56 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1018: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:56 np0005544708 podman[257945]: 2025-12-03 21:38:56.489436752 +0000 UTC m=+0.188800250 container start 58dddf7247e7c4a6e294d4dc19233c8bc1cfc8ce8e0ac1cf4f6d1e460ca9193d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_chandrasekhar, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Dec  3 16:38:56 np0005544708 podman[257945]: 2025-12-03 21:38:56.495835054 +0000 UTC m=+0.195198562 container attach 58dddf7247e7c4a6e294d4dc19233c8bc1cfc8ce8e0ac1cf4f6d1e460ca9193d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_chandrasekhar, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  3 16:38:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:38:57 np0005544708 condescending_chandrasekhar[257961]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:38:57 np0005544708 condescending_chandrasekhar[257961]: --> All data devices are unavailable
Dec  3 16:38:57 np0005544708 systemd[1]: libpod-58dddf7247e7c4a6e294d4dc19233c8bc1cfc8ce8e0ac1cf4f6d1e460ca9193d.scope: Deactivated successfully.
Dec  3 16:38:57 np0005544708 podman[257945]: 2025-12-03 21:38:57.136732752 +0000 UTC m=+0.836096270 container died 58dddf7247e7c4a6e294d4dc19233c8bc1cfc8ce8e0ac1cf4f6d1e460ca9193d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:38:57 np0005544708 systemd[1]: var-lib-containers-storage-overlay-bbbcfd02a2f384bb37632d0a7b4ade61aa1e0e99447a0146eecd81d330e1c6aa-merged.mount: Deactivated successfully.
Dec  3 16:38:57 np0005544708 podman[257945]: 2025-12-03 21:38:57.19775363 +0000 UTC m=+0.897117138 container remove 58dddf7247e7c4a6e294d4dc19233c8bc1cfc8ce8e0ac1cf4f6d1e460ca9193d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_chandrasekhar, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:38:57 np0005544708 systemd[1]: libpod-conmon-58dddf7247e7c4a6e294d4dc19233c8bc1cfc8ce8e0ac1cf4f6d1e460ca9193d.scope: Deactivated successfully.
Dec  3 16:38:57 np0005544708 podman[258057]: 2025-12-03 21:38:57.826126261 +0000 UTC m=+0.074293196 container create 068f3106c10b4ba9dee95ad363130e2c88afd1a5fac2a7427ac23edc98522efa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  3 16:38:57 np0005544708 systemd[1]: Started libpod-conmon-068f3106c10b4ba9dee95ad363130e2c88afd1a5fac2a7427ac23edc98522efa.scope.
Dec  3 16:38:57 np0005544708 podman[258057]: 2025-12-03 21:38:57.797895813 +0000 UTC m=+0.046062768 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:38:57 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:38:57 np0005544708 podman[258057]: 2025-12-03 21:38:57.906441598 +0000 UTC m=+0.154608603 container init 068f3106c10b4ba9dee95ad363130e2c88afd1a5fac2a7427ac23edc98522efa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_borg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:38:57 np0005544708 podman[258057]: 2025-12-03 21:38:57.913667941 +0000 UTC m=+0.161834886 container start 068f3106c10b4ba9dee95ad363130e2c88afd1a5fac2a7427ac23edc98522efa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_borg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:38:57 np0005544708 podman[258057]: 2025-12-03 21:38:57.918268075 +0000 UTC m=+0.166435070 container attach 068f3106c10b4ba9dee95ad363130e2c88afd1a5fac2a7427ac23edc98522efa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_borg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:38:57 np0005544708 peaceful_borg[258074]: 167 167
Dec  3 16:38:57 np0005544708 systemd[1]: libpod-068f3106c10b4ba9dee95ad363130e2c88afd1a5fac2a7427ac23edc98522efa.scope: Deactivated successfully.
Dec  3 16:38:57 np0005544708 podman[258057]: 2025-12-03 21:38:57.920610488 +0000 UTC m=+0.168777443 container died 068f3106c10b4ba9dee95ad363130e2c88afd1a5fac2a7427ac23edc98522efa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec  3 16:38:57 np0005544708 systemd[1]: var-lib-containers-storage-overlay-64aacfb9dd8213ee57025e4c38f86d1d7ae313697e6c0f485a17313c2d2aaa56-merged.mount: Deactivated successfully.
Dec  3 16:38:57 np0005544708 podman[258057]: 2025-12-03 21:38:57.970153898 +0000 UTC m=+0.218320843 container remove 068f3106c10b4ba9dee95ad363130e2c88afd1a5fac2a7427ac23edc98522efa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:38:57 np0005544708 systemd[1]: libpod-conmon-068f3106c10b4ba9dee95ad363130e2c88afd1a5fac2a7427ac23edc98522efa.scope: Deactivated successfully.
Dec  3 16:38:58 np0005544708 podman[258096]: 2025-12-03 21:38:58.236269353 +0000 UTC m=+0.073890755 container create fb59cd30b09191e491cb68555bb53b97cc49a6420e04f281abd9667ab72ba2f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  3 16:38:58 np0005544708 podman[258096]: 2025-12-03 21:38:58.206661268 +0000 UTC m=+0.044282730 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:38:58 np0005544708 systemd[1]: Started libpod-conmon-fb59cd30b09191e491cb68555bb53b97cc49a6420e04f281abd9667ab72ba2f6.scope.
Dec  3 16:38:58 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:38:58 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/748b8002f515684ae9c2386e688941354e548bc84cfc86bb0410313373e1f9c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:38:58 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/748b8002f515684ae9c2386e688941354e548bc84cfc86bb0410313373e1f9c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:38:58 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/748b8002f515684ae9c2386e688941354e548bc84cfc86bb0410313373e1f9c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:38:58 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/748b8002f515684ae9c2386e688941354e548bc84cfc86bb0410313373e1f9c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:38:58 np0005544708 podman[258096]: 2025-12-03 21:38:58.382858819 +0000 UTC m=+0.220480281 container init fb59cd30b09191e491cb68555bb53b97cc49a6420e04f281abd9667ab72ba2f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec  3 16:38:58 np0005544708 podman[258096]: 2025-12-03 21:38:58.395348334 +0000 UTC m=+0.232969736 container start fb59cd30b09191e491cb68555bb53b97cc49a6420e04f281abd9667ab72ba2f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:38:58 np0005544708 podman[258096]: 2025-12-03 21:38:58.399490296 +0000 UTC m=+0.237111688 container attach fb59cd30b09191e491cb68555bb53b97cc49a6420e04f281abd9667ab72ba2f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:38:58 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1019: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:38:58 np0005544708 charming_euler[258112]: {
Dec  3 16:38:58 np0005544708 charming_euler[258112]:    "0": [
Dec  3 16:38:58 np0005544708 charming_euler[258112]:        {
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "devices": [
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "/dev/loop3"
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            ],
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "lv_name": "ceph_lv0",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "lv_size": "21470642176",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "name": "ceph_lv0",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "tags": {
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.cluster_name": "ceph",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.crush_device_class": "",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.encrypted": "0",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.objectstore": "bluestore",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.osd_id": "0",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.type": "block",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.vdo": "0",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.with_tpm": "0"
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            },
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "type": "block",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "vg_name": "ceph_vg0"
Dec  3 16:38:58 np0005544708 charming_euler[258112]:        }
Dec  3 16:38:58 np0005544708 charming_euler[258112]:    ],
Dec  3 16:38:58 np0005544708 charming_euler[258112]:    "1": [
Dec  3 16:38:58 np0005544708 charming_euler[258112]:        {
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "devices": [
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "/dev/loop4"
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            ],
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "lv_name": "ceph_lv1",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "lv_size": "21470642176",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "name": "ceph_lv1",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "tags": {
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.cluster_name": "ceph",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.crush_device_class": "",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.encrypted": "0",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.objectstore": "bluestore",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.osd_id": "1",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.type": "block",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.vdo": "0",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.with_tpm": "0"
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            },
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "type": "block",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "vg_name": "ceph_vg1"
Dec  3 16:38:58 np0005544708 charming_euler[258112]:        }
Dec  3 16:38:58 np0005544708 charming_euler[258112]:    ],
Dec  3 16:38:58 np0005544708 charming_euler[258112]:    "2": [
Dec  3 16:38:58 np0005544708 charming_euler[258112]:        {
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "devices": [
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "/dev/loop5"
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            ],
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "lv_name": "ceph_lv2",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "lv_size": "21470642176",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "name": "ceph_lv2",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "tags": {
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.cluster_name": "ceph",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.crush_device_class": "",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.encrypted": "0",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.objectstore": "bluestore",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.osd_id": "2",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.type": "block",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.vdo": "0",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:                "ceph.with_tpm": "0"
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            },
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "type": "block",
Dec  3 16:38:58 np0005544708 charming_euler[258112]:            "vg_name": "ceph_vg2"
Dec  3 16:38:58 np0005544708 charming_euler[258112]:        }
Dec  3 16:38:58 np0005544708 charming_euler[258112]:    ]
Dec  3 16:38:58 np0005544708 charming_euler[258112]: }
Dec  3 16:38:58 np0005544708 systemd[1]: libpod-fb59cd30b09191e491cb68555bb53b97cc49a6420e04f281abd9667ab72ba2f6.scope: Deactivated successfully.
Dec  3 16:38:58 np0005544708 podman[258096]: 2025-12-03 21:38:58.720271248 +0000 UTC m=+0.557892620 container died fb59cd30b09191e491cb68555bb53b97cc49a6420e04f281abd9667ab72ba2f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  3 16:38:58 np0005544708 systemd[1]: var-lib-containers-storage-overlay-748b8002f515684ae9c2386e688941354e548bc84cfc86bb0410313373e1f9c1-merged.mount: Deactivated successfully.
Dec  3 16:38:58 np0005544708 podman[258096]: 2025-12-03 21:38:58.777242427 +0000 UTC m=+0.614863829 container remove fb59cd30b09191e491cb68555bb53b97cc49a6420e04f281abd9667ab72ba2f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 16:38:58 np0005544708 systemd[1]: libpod-conmon-fb59cd30b09191e491cb68555bb53b97cc49a6420e04f281abd9667ab72ba2f6.scope: Deactivated successfully.
Dec  3 16:38:59 np0005544708 podman[258193]: 2025-12-03 21:38:59.350451108 +0000 UTC m=+0.058393749 container create 48acbd99009f3aeb1b0354d8a08e19f17cbf94ea6051ca69ac6df5fe57e9e48d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shaw, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:38:59 np0005544708 systemd[1]: Started libpod-conmon-48acbd99009f3aeb1b0354d8a08e19f17cbf94ea6051ca69ac6df5fe57e9e48d.scope.
Dec  3 16:38:59 np0005544708 podman[258193]: 2025-12-03 21:38:59.323698 +0000 UTC m=+0.031640701 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:38:59 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:38:59 np0005544708 podman[258193]: 2025-12-03 21:38:59.441323588 +0000 UTC m=+0.149266219 container init 48acbd99009f3aeb1b0354d8a08e19f17cbf94ea6051ca69ac6df5fe57e9e48d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shaw, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec  3 16:38:59 np0005544708 podman[258193]: 2025-12-03 21:38:59.451771848 +0000 UTC m=+0.159714459 container start 48acbd99009f3aeb1b0354d8a08e19f17cbf94ea6051ca69ac6df5fe57e9e48d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shaw, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:38:59 np0005544708 podman[258193]: 2025-12-03 21:38:59.455719385 +0000 UTC m=+0.163662016 container attach 48acbd99009f3aeb1b0354d8a08e19f17cbf94ea6051ca69ac6df5fe57e9e48d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shaw, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec  3 16:38:59 np0005544708 happy_shaw[258210]: 167 167
Dec  3 16:38:59 np0005544708 systemd[1]: libpod-48acbd99009f3aeb1b0354d8a08e19f17cbf94ea6051ca69ac6df5fe57e9e48d.scope: Deactivated successfully.
Dec  3 16:38:59 np0005544708 podman[258193]: 2025-12-03 21:38:59.45890941 +0000 UTC m=+0.166852051 container died 48acbd99009f3aeb1b0354d8a08e19f17cbf94ea6051ca69ac6df5fe57e9e48d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shaw, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec  3 16:38:59 np0005544708 systemd[1]: var-lib-containers-storage-overlay-473f79dd126a0b61cdea17267b14308abf7df11add406a09d1a2816e49fd19d5-merged.mount: Deactivated successfully.
Dec  3 16:38:59 np0005544708 podman[258193]: 2025-12-03 21:38:59.510476734 +0000 UTC m=+0.218419365 container remove 48acbd99009f3aeb1b0354d8a08e19f17cbf94ea6051ca69ac6df5fe57e9e48d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shaw, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:38:59 np0005544708 systemd[1]: libpod-conmon-48acbd99009f3aeb1b0354d8a08e19f17cbf94ea6051ca69ac6df5fe57e9e48d.scope: Deactivated successfully.
Dec  3 16:38:59 np0005544708 podman[258232]: 2025-12-03 21:38:59.71142042 +0000 UTC m=+0.067574426 container create 5ecdff2123c0f11fbbc63c6200ace8b759bede935808a098163abd94027d6ffc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mendeleev, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  3 16:38:59 np0005544708 podman[258232]: 2025-12-03 21:38:59.684229039 +0000 UTC m=+0.040383155 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:38:59 np0005544708 systemd[1]: Started libpod-conmon-5ecdff2123c0f11fbbc63c6200ace8b759bede935808a098163abd94027d6ffc.scope.
Dec  3 16:38:59 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:38:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aefc3d05ec477f865a1bd33363177a77ad3925a164d72a72a037659a3d4ad008/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:38:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aefc3d05ec477f865a1bd33363177a77ad3925a164d72a72a037659a3d4ad008/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:38:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aefc3d05ec477f865a1bd33363177a77ad3925a164d72a72a037659a3d4ad008/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:38:59 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aefc3d05ec477f865a1bd33363177a77ad3925a164d72a72a037659a3d4ad008/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:38:59 np0005544708 podman[258232]: 2025-12-03 21:38:59.86004814 +0000 UTC m=+0.216202226 container init 5ecdff2123c0f11fbbc63c6200ace8b759bede935808a098163abd94027d6ffc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:38:59 np0005544708 podman[258232]: 2025-12-03 21:38:59.869481763 +0000 UTC m=+0.225635799 container start 5ecdff2123c0f11fbbc63c6200ace8b759bede935808a098163abd94027d6ffc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  3 16:38:59 np0005544708 podman[258232]: 2025-12-03 21:38:59.874473567 +0000 UTC m=+0.230627653 container attach 5ecdff2123c0f11fbbc63c6200ace8b759bede935808a098163abd94027d6ffc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:38:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  3 16:38:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3829623944' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  3 16:38:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  3 16:38:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3829623944' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  3 16:39:00 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1020: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:00 np0005544708 lvm[258328]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:39:00 np0005544708 lvm[258327]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:39:00 np0005544708 lvm[258330]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:39:00 np0005544708 lvm[258328]: VG ceph_vg1 finished
Dec  3 16:39:00 np0005544708 lvm[258327]: VG ceph_vg0 finished
Dec  3 16:39:00 np0005544708 lvm[258330]: VG ceph_vg2 finished
Dec  3 16:39:00 np0005544708 lvm[258332]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:39:00 np0005544708 lvm[258332]: VG ceph_vg2 finished
Dec  3 16:39:00 np0005544708 lucid_mendeleev[258249]: {}
Dec  3 16:39:00 np0005544708 systemd[1]: libpod-5ecdff2123c0f11fbbc63c6200ace8b759bede935808a098163abd94027d6ffc.scope: Deactivated successfully.
Dec  3 16:39:00 np0005544708 systemd[1]: libpod-5ecdff2123c0f11fbbc63c6200ace8b759bede935808a098163abd94027d6ffc.scope: Consumed 1.387s CPU time.
Dec  3 16:39:00 np0005544708 podman[258232]: 2025-12-03 21:39:00.766500468 +0000 UTC m=+1.122654514 container died 5ecdff2123c0f11fbbc63c6200ace8b759bede935808a098163abd94027d6ffc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mendeleev, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:39:00 np0005544708 systemd[1]: var-lib-containers-storage-overlay-aefc3d05ec477f865a1bd33363177a77ad3925a164d72a72a037659a3d4ad008-merged.mount: Deactivated successfully.
Dec  3 16:39:00 np0005544708 podman[258232]: 2025-12-03 21:39:00.819907392 +0000 UTC m=+1.176061418 container remove 5ecdff2123c0f11fbbc63c6200ace8b759bede935808a098163abd94027d6ffc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec  3 16:39:00 np0005544708 systemd[1]: libpod-conmon-5ecdff2123c0f11fbbc63c6200ace8b759bede935808a098163abd94027d6ffc.scope: Deactivated successfully.
Dec  3 16:39:00 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:39:00 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:39:00 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:39:00 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:39:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:39:01 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:39:01 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:39:02 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1021: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:04 np0005544708 podman[258369]: 2025-12-03 21:39:04.209739206 +0000 UTC m=+0.144238824 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  3 16:39:04 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1022: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:06 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1023: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:39:08 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1024: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:09 np0005544708 ceph-osd[86059]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:39:09 np0005544708 ceph-osd[86059]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 5730 writes, 23K keys, 5730 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5730 writes, 1063 syncs, 5.39 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1357 writes, 3840 keys, 1357 commit groups, 1.0 writes per commit group, ingest: 2.14 MB, 0.00 MB/s#012Interval WAL: 1357 writes, 612 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 16:39:10 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1025: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:39:12 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1026: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:14 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1027: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:14 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:39:14 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 6189 writes, 25K keys, 6189 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6189 writes, 1252 syncs, 4.94 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1674 writes, 4716 keys, 1674 commit groups, 1.0 writes per commit group, ingest: 2.71 MB, 0.00 MB/s#012Interval WAL: 1674 writes, 747 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 16:39:16 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1028: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:39:18 np0005544708 podman[258397]: 2025-12-03 21:39:18.162832883 +0000 UTC m=+0.086547363 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 16:39:18 np0005544708 podman[258396]: 2025-12-03 21:39:18.169743869 +0000 UTC m=+0.099629526 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 16:39:18 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1029: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:19 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:39:19 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 5828 writes, 23K keys, 5828 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5828 writes, 1121 syncs, 5.20 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1678 writes, 4312 keys, 1678 commit groups, 1.0 writes per commit group, ingest: 2.35 MB, 0.00 MB/s#012Interval WAL: 1678 writes, 755 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 16:39:20 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1030: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:39:21
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'backups', 'volumes', '.mgr', 'vms']
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [devicehealth INFO root] Check health
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:39:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:39:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:39:22 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1031: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:24 np0005544708 nova_compute[241566]: 2025-12-03 21:39:24.102 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:39:24 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1032: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:24 np0005544708 nova_compute[241566]: 2025-12-03 21:39:24.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:39:24 np0005544708 nova_compute[241566]: 2025-12-03 21:39:24.552 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 16:39:24 np0005544708 nova_compute[241566]: 2025-12-03 21:39:24.552 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 16:39:24 np0005544708 nova_compute[241566]: 2025-12-03 21:39:24.572 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 16:39:25 np0005544708 nova_compute[241566]: 2025-12-03 21:39:25.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:39:25 np0005544708 nova_compute[241566]: 2025-12-03 21:39:25.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:39:26 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1033: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:26 np0005544708 nova_compute[241566]: 2025-12-03 21:39:26.552 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:39:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:39:27 np0005544708 nova_compute[241566]: 2025-12-03 21:39:27.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:39:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:39:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:39:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:39:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:39:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.533351853729544e-07 of space, bias 1.0, pg target 0.00010600055561188632 quantized to 32 (current 32)
Dec  3 16:39:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:39:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.833582600959152e-08 of space, bias 1.0, pg target 2.9500747802877454e-05 quantized to 32 (current 32)
Dec  3 16:39:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:39:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:39:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:39:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006678894471709225 of space, bias 1.0, pg target 0.20036683415127676 quantized to 32 (current 32)
Dec  3 16:39:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:39:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.953496000112683e-07 of space, bias 4.0, pg target 0.0009544195200135219 quantized to 16 (current 16)
Dec  3 16:39:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:39:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:39:28 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1034: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:28 np0005544708 nova_compute[241566]: 2025-12-03 21:39:28.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:39:28 np0005544708 nova_compute[241566]: 2025-12-03 21:39:28.552 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:39:28 np0005544708 nova_compute[241566]: 2025-12-03 21:39:28.552 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 16:39:30 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1035: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:30 np0005544708 nova_compute[241566]: 2025-12-03 21:39:30.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:39:30 np0005544708 nova_compute[241566]: 2025-12-03 21:39:30.593 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:39:30 np0005544708 nova_compute[241566]: 2025-12-03 21:39:30.593 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:39:30 np0005544708 nova_compute[241566]: 2025-12-03 21:39:30.594 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:39:30 np0005544708 nova_compute[241566]: 2025-12-03 21:39:30.594 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 16:39:30 np0005544708 nova_compute[241566]: 2025-12-03 21:39:30.595 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:39:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:39:31 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2628005888' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:39:31 np0005544708 nova_compute[241566]: 2025-12-03 21:39:31.185 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:39:31 np0005544708 nova_compute[241566]: 2025-12-03 21:39:31.459 241570 WARNING nova.virt.libvirt.driver [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 16:39:31 np0005544708 nova_compute[241566]: 2025-12-03 21:39:31.461 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5132MB free_disk=59.988260054029524GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 16:39:31 np0005544708 nova_compute[241566]: 2025-12-03 21:39:31.461 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:39:31 np0005544708 nova_compute[241566]: 2025-12-03 21:39:31.462 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:39:31 np0005544708 nova_compute[241566]: 2025-12-03 21:39:31.551 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 16:39:31 np0005544708 nova_compute[241566]: 2025-12-03 21:39:31.551 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 16:39:31 np0005544708 nova_compute[241566]: 2025-12-03 21:39:31.580 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:39:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:39:32 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:39:32 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3593444517' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:39:32 np0005544708 nova_compute[241566]: 2025-12-03 21:39:32.156 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:39:32 np0005544708 nova_compute[241566]: 2025-12-03 21:39:32.166 241570 DEBUG nova.compute.provider_tree [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed in ProviderTree for provider: 94aba67c-5c5e-45d0-83d1-33eb467c8775 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 16:39:32 np0005544708 nova_compute[241566]: 2025-12-03 21:39:32.191 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 16:39:32 np0005544708 nova_compute[241566]: 2025-12-03 21:39:32.194 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 16:39:32 np0005544708 nova_compute[241566]: 2025-12-03 21:39:32.194 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:39:32 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1036: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:34 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1037: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:35 np0005544708 podman[258477]: 2025-12-03 21:39:35.250405991 +0000 UTC m=+0.182574942 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  3 16:39:36 np0005544708 nova_compute[241566]: 2025-12-03 21:39:36.189 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:39:36 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1038: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:39:38 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1039: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:40 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1040: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:39:42 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1041: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:44 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1042: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:46 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1043: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:39:48 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1044: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:39:48.944 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:39:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:39:48.945 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:39:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:39:48.945 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:39:49 np0005544708 podman[258504]: 2025-12-03 21:39:49.159842528 +0000 UTC m=+0.088026524 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:39:49 np0005544708 podman[258505]: 2025-12-03 21:39:49.184093399 +0000 UTC m=+0.083408061 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  3 16:39:50 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1045: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:39:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:39:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:39:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:39:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:39:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:39:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:39:52 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1046: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:54 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1047: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:56 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1048: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:39:58 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1049: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:39:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  3 16:39:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3985118692' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  3 16:39:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  3 16:39:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3985118692' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  3 16:40:00 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1050: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:40:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:40:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:40:01 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:40:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:40:02 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:40:02 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:40:02 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1051: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:03 np0005544708 podman[258756]: 2025-12-03 21:40:03.026131797 +0000 UTC m=+0.067355460 container create 8b806e6254b602ed1ed44ee462e8e1fe6937d61f6c9e3acbe757f8c172c84e9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec  3 16:40:03 np0005544708 systemd[1]: Started libpod-conmon-8b806e6254b602ed1ed44ee462e8e1fe6937d61f6c9e3acbe757f8c172c84e9d.scope.
Dec  3 16:40:03 np0005544708 podman[258756]: 2025-12-03 21:40:02.997145419 +0000 UTC m=+0.038369092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:40:03 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:40:03 np0005544708 podman[258756]: 2025-12-03 21:40:03.129646486 +0000 UTC m=+0.170870169 container init 8b806e6254b602ed1ed44ee462e8e1fe6937d61f6c9e3acbe757f8c172c84e9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_babbage, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  3 16:40:03 np0005544708 podman[258756]: 2025-12-03 21:40:03.141006191 +0000 UTC m=+0.182229864 container start 8b806e6254b602ed1ed44ee462e8e1fe6937d61f6c9e3acbe757f8c172c84e9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  3 16:40:03 np0005544708 podman[258756]: 2025-12-03 21:40:03.145151613 +0000 UTC m=+0.186375346 container attach 8b806e6254b602ed1ed44ee462e8e1fe6937d61f6c9e3acbe757f8c172c84e9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 16:40:03 np0005544708 confident_babbage[258772]: 167 167
Dec  3 16:40:03 np0005544708 systemd[1]: libpod-8b806e6254b602ed1ed44ee462e8e1fe6937d61f6c9e3acbe757f8c172c84e9d.scope: Deactivated successfully.
Dec  3 16:40:03 np0005544708 podman[258756]: 2025-12-03 21:40:03.149598531 +0000 UTC m=+0.190822194 container died 8b806e6254b602ed1ed44ee462e8e1fe6937d61f6c9e3acbe757f8c172c84e9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  3 16:40:03 np0005544708 systemd[1]: var-lib-containers-storage-overlay-8e071b05d4d70a31427d70d56c008b4e5c1d060f361cf3f86565be6a0fe38202-merged.mount: Deactivated successfully.
Dec  3 16:40:03 np0005544708 podman[258756]: 2025-12-03 21:40:03.207349242 +0000 UTC m=+0.248572915 container remove 8b806e6254b602ed1ed44ee462e8e1fe6937d61f6c9e3acbe757f8c172c84e9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_babbage, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:40:03 np0005544708 systemd[1]: libpod-conmon-8b806e6254b602ed1ed44ee462e8e1fe6937d61f6c9e3acbe757f8c172c84e9d.scope: Deactivated successfully.
Dec  3 16:40:03 np0005544708 podman[258798]: 2025-12-03 21:40:03.451244171 +0000 UTC m=+0.065958183 container create f3fcf45952b32c8f596ff0205a7bf3f745312f36d127e26a8fa64cf2f67c4c09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:40:03 np0005544708 systemd[1]: Started libpod-conmon-f3fcf45952b32c8f596ff0205a7bf3f745312f36d127e26a8fa64cf2f67c4c09.scope.
Dec  3 16:40:03 np0005544708 podman[258798]: 2025-12-03 21:40:03.429052995 +0000 UTC m=+0.043766987 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:40:03 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:40:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/765ab9807ebabbcaee1f29d5db33f2310264c8e5bced45f2725ddec9fdbe6971/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:40:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/765ab9807ebabbcaee1f29d5db33f2310264c8e5bced45f2725ddec9fdbe6971/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:40:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/765ab9807ebabbcaee1f29d5db33f2310264c8e5bced45f2725ddec9fdbe6971/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:40:03 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/765ab9807ebabbcaee1f29d5db33f2310264c8e5bced45f2725ddec9fdbe6971/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:40:03 np0005544708 podman[258798]: 2025-12-03 21:40:03.567505462 +0000 UTC m=+0.182219524 container init f3fcf45952b32c8f596ff0205a7bf3f745312f36d127e26a8fa64cf2f67c4c09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec  3 16:40:03 np0005544708 podman[258798]: 2025-12-03 21:40:03.579087624 +0000 UTC m=+0.193801636 container start f3fcf45952b32c8f596ff0205a7bf3f745312f36d127e26a8fa64cf2f67c4c09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:40:03 np0005544708 podman[258798]: 2025-12-03 21:40:03.583527752 +0000 UTC m=+0.198241784 container attach f3fcf45952b32c8f596ff0205a7bf3f745312f36d127e26a8fa64cf2f67c4c09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]: [
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:    {
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:        "available": false,
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:        "being_replaced": false,
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:        "ceph_device_lvm": false,
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:        "lsm_data": {},
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:        "lvs": [],
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:        "path": "/dev/sr0",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:        "rejected_reasons": [
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "Insufficient space (<5GB)",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "Has a FileSystem"
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:        ],
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:        "sys_api": {
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "actuators": null,
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "device_nodes": [
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:                "sr0"
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            ],
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "devname": "sr0",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "human_readable_size": "482.00 KB",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "id_bus": "ata",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "model": "QEMU DVD-ROM",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "nr_requests": "2",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "parent": "/dev/sr0",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "partitions": {},
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "path": "/dev/sr0",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "removable": "1",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "rev": "2.5+",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "ro": "0",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "rotational": "1",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "sas_address": "",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "sas_device_handle": "",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "scheduler_mode": "mq-deadline",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "sectors": 0,
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "sectorsize": "2048",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "size": 493568.0,
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "support_discard": "2048",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "type": "disk",
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:            "vendor": "QEMU"
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:        }
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]:    }
Dec  3 16:40:04 np0005544708 flamboyant_wiles[258814]: ]
Dec  3 16:40:04 np0005544708 systemd[1]: libpod-f3fcf45952b32c8f596ff0205a7bf3f745312f36d127e26a8fa64cf2f67c4c09.scope: Deactivated successfully.
Dec  3 16:40:04 np0005544708 podman[258798]: 2025-12-03 21:40:04.156508717 +0000 UTC m=+0.771222719 container died f3fcf45952b32c8f596ff0205a7bf3f745312f36d127e26a8fa64cf2f67c4c09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec  3 16:40:04 np0005544708 systemd[1]: var-lib-containers-storage-overlay-765ab9807ebabbcaee1f29d5db33f2310264c8e5bced45f2725ddec9fdbe6971-merged.mount: Deactivated successfully.
Dec  3 16:40:04 np0005544708 podman[258798]: 2025-12-03 21:40:04.213392484 +0000 UTC m=+0.828106456 container remove f3fcf45952b32c8f596ff0205a7bf3f745312f36d127e26a8fa64cf2f67c4c09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:40:04 np0005544708 systemd[1]: libpod-conmon-f3fcf45952b32c8f596ff0205a7bf3f745312f36d127e26a8fa64cf2f67c4c09.scope: Deactivated successfully.
Dec  3 16:40:04 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:40:04 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:40:04 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:40:04 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:40:04 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  3 16:40:04 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec  3 16:40:04 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:40:04 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:40:04 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:40:04 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:40:04 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:40:04 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:40:04 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:40:04 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:40:04 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:40:04 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:40:04 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:40:04 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:40:04 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1052: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:04 np0005544708 podman[259628]: 2025-12-03 21:40:04.871400331 +0000 UTC m=+0.076381702 container create 9420cc8ecae1361acc3e437a963e91bbad8a539082d853538b87db9838b0025b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_dewdney, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:40:04 np0005544708 systemd[1]: Started libpod-conmon-9420cc8ecae1361acc3e437a963e91bbad8a539082d853538b87db9838b0025b.scope.
Dec  3 16:40:04 np0005544708 podman[259628]: 2025-12-03 21:40:04.841125158 +0000 UTC m=+0.046106569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:40:04 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:40:04 np0005544708 podman[259628]: 2025-12-03 21:40:04.969454533 +0000 UTC m=+0.174435924 container init 9420cc8ecae1361acc3e437a963e91bbad8a539082d853538b87db9838b0025b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 16:40:04 np0005544708 podman[259628]: 2025-12-03 21:40:04.979274306 +0000 UTC m=+0.184255677 container start 9420cc8ecae1361acc3e437a963e91bbad8a539082d853538b87db9838b0025b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  3 16:40:04 np0005544708 podman[259628]: 2025-12-03 21:40:04.983286065 +0000 UTC m=+0.188267476 container attach 9420cc8ecae1361acc3e437a963e91bbad8a539082d853538b87db9838b0025b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_dewdney, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec  3 16:40:04 np0005544708 naughty_dewdney[259644]: 167 167
Dec  3 16:40:04 np0005544708 systemd[1]: libpod-9420cc8ecae1361acc3e437a963e91bbad8a539082d853538b87db9838b0025b.scope: Deactivated successfully.
Dec  3 16:40:04 np0005544708 podman[259628]: 2025-12-03 21:40:04.987274181 +0000 UTC m=+0.192255552 container died 9420cc8ecae1361acc3e437a963e91bbad8a539082d853538b87db9838b0025b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  3 16:40:05 np0005544708 systemd[1]: var-lib-containers-storage-overlay-62adb8949b483ff01644bfaf07b2668235ddd45edf8b539ed5889e17422d9168-merged.mount: Deactivated successfully.
Dec  3 16:40:05 np0005544708 podman[259628]: 2025-12-03 21:40:05.043440839 +0000 UTC m=+0.248422200 container remove 9420cc8ecae1361acc3e437a963e91bbad8a539082d853538b87db9838b0025b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_dewdney, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:40:05 np0005544708 systemd[1]: libpod-conmon-9420cc8ecae1361acc3e437a963e91bbad8a539082d853538b87db9838b0025b.scope: Deactivated successfully.
Dec  3 16:40:05 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:40:05 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:40:05 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec  3 16:40:05 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:40:05 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:40:05 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:40:05 np0005544708 podman[259668]: 2025-12-03 21:40:05.302821564 +0000 UTC m=+0.070495684 container create 988d30ac7afefb91ecd6979e93fb7058afb9a79378a33c1274c2de7c42730e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_jemison, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:40:05 np0005544708 systemd[1]: Started libpod-conmon-988d30ac7afefb91ecd6979e93fb7058afb9a79378a33c1274c2de7c42730e71.scope.
Dec  3 16:40:05 np0005544708 podman[259668]: 2025-12-03 21:40:05.272319435 +0000 UTC m=+0.039993595 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:40:05 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:40:05 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8023176697cee9a01a0d55c6369ff0235ca846eb11a41dd424f0768faabcfeae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:40:05 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8023176697cee9a01a0d55c6369ff0235ca846eb11a41dd424f0768faabcfeae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:40:05 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8023176697cee9a01a0d55c6369ff0235ca846eb11a41dd424f0768faabcfeae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:40:05 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8023176697cee9a01a0d55c6369ff0235ca846eb11a41dd424f0768faabcfeae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:40:05 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8023176697cee9a01a0d55c6369ff0235ca846eb11a41dd424f0768faabcfeae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:40:05 np0005544708 podman[259668]: 2025-12-03 21:40:05.421603743 +0000 UTC m=+0.189277913 container init 988d30ac7afefb91ecd6979e93fb7058afb9a79378a33c1274c2de7c42730e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:40:05 np0005544708 podman[259668]: 2025-12-03 21:40:05.437592122 +0000 UTC m=+0.205266232 container start 988d30ac7afefb91ecd6979e93fb7058afb9a79378a33c1274c2de7c42730e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  3 16:40:05 np0005544708 podman[259668]: 2025-12-03 21:40:05.441993961 +0000 UTC m=+0.209668101 container attach 988d30ac7afefb91ecd6979e93fb7058afb9a79378a33c1274c2de7c42730e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_jemison, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec  3 16:40:05 np0005544708 podman[259682]: 2025-12-03 21:40:05.559843675 +0000 UTC m=+0.211548041 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:40:05 np0005544708 peaceful_jemison[259685]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:40:05 np0005544708 peaceful_jemison[259685]: --> All data devices are unavailable
Dec  3 16:40:06 np0005544708 systemd[1]: libpod-988d30ac7afefb91ecd6979e93fb7058afb9a79378a33c1274c2de7c42730e71.scope: Deactivated successfully.
Dec  3 16:40:06 np0005544708 podman[259730]: 2025-12-03 21:40:06.08819991 +0000 UTC m=+0.041135144 container died 988d30ac7afefb91ecd6979e93fb7058afb9a79378a33c1274c2de7c42730e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  3 16:40:06 np0005544708 systemd[1]: var-lib-containers-storage-overlay-8023176697cee9a01a0d55c6369ff0235ca846eb11a41dd424f0768faabcfeae-merged.mount: Deactivated successfully.
Dec  3 16:40:06 np0005544708 podman[259730]: 2025-12-03 21:40:06.149035874 +0000 UTC m=+0.101971058 container remove 988d30ac7afefb91ecd6979e93fb7058afb9a79378a33c1274c2de7c42730e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_jemison, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:40:06 np0005544708 systemd[1]: libpod-conmon-988d30ac7afefb91ecd6979e93fb7058afb9a79378a33c1274c2de7c42730e71.scope: Deactivated successfully.
Dec  3 16:40:06 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1053: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:06 np0005544708 podman[259809]: 2025-12-03 21:40:06.723395355 +0000 UTC m=+0.077214554 container create 6e92615f2c58e04fe85d5692ca3037893cded05f01f43a4c9ef3962bc74575a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dhawan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  3 16:40:06 np0005544708 systemd[1]: Started libpod-conmon-6e92615f2c58e04fe85d5692ca3037893cded05f01f43a4c9ef3962bc74575a6.scope.
Dec  3 16:40:06 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:40:06 np0005544708 podman[259809]: 2025-12-03 21:40:06.702906545 +0000 UTC m=+0.056725774 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:40:06 np0005544708 podman[259809]: 2025-12-03 21:40:06.810866363 +0000 UTC m=+0.164685642 container init 6e92615f2c58e04fe85d5692ca3037893cded05f01f43a4c9ef3962bc74575a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  3 16:40:06 np0005544708 podman[259809]: 2025-12-03 21:40:06.822131516 +0000 UTC m=+0.175950745 container start 6e92615f2c58e04fe85d5692ca3037893cded05f01f43a4c9ef3962bc74575a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dhawan, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  3 16:40:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:40:06 np0005544708 podman[259809]: 2025-12-03 21:40:06.82675037 +0000 UTC m=+0.180569589 container attach 6e92615f2c58e04fe85d5692ca3037893cded05f01f43a4c9ef3962bc74575a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:40:06 np0005544708 jolly_dhawan[259825]: 167 167
Dec  3 16:40:06 np0005544708 systemd[1]: libpod-6e92615f2c58e04fe85d5692ca3037893cded05f01f43a4c9ef3962bc74575a6.scope: Deactivated successfully.
Dec  3 16:40:06 np0005544708 podman[259809]: 2025-12-03 21:40:06.831323222 +0000 UTC m=+0.185142451 container died 6e92615f2c58e04fe85d5692ca3037893cded05f01f43a4c9ef3962bc74575a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dhawan, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:40:06 np0005544708 systemd[1]: var-lib-containers-storage-overlay-d2448b85bf2696f9563ea83ab1e15c47ead65a1261e8a57b00afdb4d3e53eff9-merged.mount: Deactivated successfully.
Dec  3 16:40:06 np0005544708 podman[259809]: 2025-12-03 21:40:06.884889571 +0000 UTC m=+0.238708800 container remove 6e92615f2c58e04fe85d5692ca3037893cded05f01f43a4c9ef3962bc74575a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dhawan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:40:06 np0005544708 systemd[1]: libpod-conmon-6e92615f2c58e04fe85d5692ca3037893cded05f01f43a4c9ef3962bc74575a6.scope: Deactivated successfully.
Dec  3 16:40:07 np0005544708 podman[259849]: 2025-12-03 21:40:07.140843583 +0000 UTC m=+0.064685407 container create b013c2f44cca706b7d5c86b2d6a4bcb2f32162f90fa3025c249a4fd47d198434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_rubin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:40:07 np0005544708 systemd[1]: Started libpod-conmon-b013c2f44cca706b7d5c86b2d6a4bcb2f32162f90fa3025c249a4fd47d198434.scope.
Dec  3 16:40:07 np0005544708 podman[259849]: 2025-12-03 21:40:07.11427715 +0000 UTC m=+0.038119024 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:40:07 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:40:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a5b32ee8a617225352ab051ed29d7244005b639a8290455e2013b32a175f0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:40:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a5b32ee8a617225352ab051ed29d7244005b639a8290455e2013b32a175f0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:40:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a5b32ee8a617225352ab051ed29d7244005b639a8290455e2013b32a175f0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:40:07 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a5b32ee8a617225352ab051ed29d7244005b639a8290455e2013b32a175f0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:40:07 np0005544708 podman[259849]: 2025-12-03 21:40:07.260393773 +0000 UTC m=+0.184235647 container init b013c2f44cca706b7d5c86b2d6a4bcb2f32162f90fa3025c249a4fd47d198434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 16:40:07 np0005544708 podman[259849]: 2025-12-03 21:40:07.277268156 +0000 UTC m=+0.201109980 container start b013c2f44cca706b7d5c86b2d6a4bcb2f32162f90fa3025c249a4fd47d198434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec  3 16:40:07 np0005544708 podman[259849]: 2025-12-03 21:40:07.281114439 +0000 UTC m=+0.204956253 container attach b013c2f44cca706b7d5c86b2d6a4bcb2f32162f90fa3025c249a4fd47d198434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_rubin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  3 16:40:07 np0005544708 cool_rubin[259865]: {
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:    "0": [
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:        {
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "devices": [
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "/dev/loop3"
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            ],
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "lv_name": "ceph_lv0",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "lv_size": "21470642176",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "name": "ceph_lv0",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "tags": {
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.cluster_name": "ceph",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.crush_device_class": "",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.encrypted": "0",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.objectstore": "bluestore",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.osd_id": "0",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.type": "block",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.vdo": "0",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.with_tpm": "0"
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            },
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "type": "block",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "vg_name": "ceph_vg0"
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:        }
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:    ],
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:    "1": [
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:        {
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "devices": [
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "/dev/loop4"
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            ],
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "lv_name": "ceph_lv1",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "lv_size": "21470642176",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "name": "ceph_lv1",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "tags": {
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.cluster_name": "ceph",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.crush_device_class": "",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.encrypted": "0",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.objectstore": "bluestore",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.osd_id": "1",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.type": "block",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.vdo": "0",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.with_tpm": "0"
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            },
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "type": "block",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "vg_name": "ceph_vg1"
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:        }
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:    ],
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:    "2": [
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:        {
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "devices": [
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "/dev/loop5"
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            ],
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "lv_name": "ceph_lv2",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "lv_size": "21470642176",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "name": "ceph_lv2",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "tags": {
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.cluster_name": "ceph",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.crush_device_class": "",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.encrypted": "0",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.objectstore": "bluestore",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.osd_id": "2",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.type": "block",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.vdo": "0",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:                "ceph.with_tpm": "0"
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            },
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "type": "block",
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:            "vg_name": "ceph_vg2"
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:        }
Dec  3 16:40:07 np0005544708 cool_rubin[259865]:    ]
Dec  3 16:40:07 np0005544708 cool_rubin[259865]: }
Dec  3 16:40:07 np0005544708 systemd[1]: libpod-b013c2f44cca706b7d5c86b2d6a4bcb2f32162f90fa3025c249a4fd47d198434.scope: Deactivated successfully.
Dec  3 16:40:07 np0005544708 podman[259849]: 2025-12-03 21:40:07.655642535 +0000 UTC m=+0.579484389 container died b013c2f44cca706b7d5c86b2d6a4bcb2f32162f90fa3025c249a4fd47d198434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:40:07 np0005544708 systemd[1]: var-lib-containers-storage-overlay-19a5b32ee8a617225352ab051ed29d7244005b639a8290455e2013b32a175f0c-merged.mount: Deactivated successfully.
Dec  3 16:40:07 np0005544708 podman[259849]: 2025-12-03 21:40:07.70799274 +0000 UTC m=+0.631834564 container remove b013c2f44cca706b7d5c86b2d6a4bcb2f32162f90fa3025c249a4fd47d198434 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_rubin, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Dec  3 16:40:07 np0005544708 systemd[1]: libpod-conmon-b013c2f44cca706b7d5c86b2d6a4bcb2f32162f90fa3025c249a4fd47d198434.scope: Deactivated successfully.
Dec  3 16:40:08 np0005544708 podman[259946]: 2025-12-03 21:40:08.26292065 +0000 UTC m=+0.066241489 container create 22e4b35787a535b674abcaf5f67cc255f14d94ce6ed4bb9feea2579740278897 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kapitsa, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:40:08 np0005544708 systemd[1]: Started libpod-conmon-22e4b35787a535b674abcaf5f67cc255f14d94ce6ed4bb9feea2579740278897.scope.
Dec  3 16:40:08 np0005544708 podman[259946]: 2025-12-03 21:40:08.237393234 +0000 UTC m=+0.040714133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:40:08 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:40:08 np0005544708 podman[259946]: 2025-12-03 21:40:08.360810948 +0000 UTC m=+0.164131837 container init 22e4b35787a535b674abcaf5f67cc255f14d94ce6ed4bb9feea2579740278897 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kapitsa, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:40:08 np0005544708 podman[259946]: 2025-12-03 21:40:08.371874395 +0000 UTC m=+0.175195244 container start 22e4b35787a535b674abcaf5f67cc255f14d94ce6ed4bb9feea2579740278897 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kapitsa, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 16:40:08 np0005544708 podman[259946]: 2025-12-03 21:40:08.377318701 +0000 UTC m=+0.180639550 container attach 22e4b35787a535b674abcaf5f67cc255f14d94ce6ed4bb9feea2579740278897 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:40:08 np0005544708 optimistic_kapitsa[259962]: 167 167
Dec  3 16:40:08 np0005544708 systemd[1]: libpod-22e4b35787a535b674abcaf5f67cc255f14d94ce6ed4bb9feea2579740278897.scope: Deactivated successfully.
Dec  3 16:40:08 np0005544708 podman[259946]: 2025-12-03 21:40:08.380026374 +0000 UTC m=+0.183347213 container died 22e4b35787a535b674abcaf5f67cc255f14d94ce6ed4bb9feea2579740278897 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  3 16:40:08 np0005544708 systemd[1]: var-lib-containers-storage-overlay-a751f788a6dfacf3c1cf03b2be8e567a344abc9f119f4f0f636a11b875dd5881-merged.mount: Deactivated successfully.
Dec  3 16:40:08 np0005544708 podman[259946]: 2025-12-03 21:40:08.431425654 +0000 UTC m=+0.234746503 container remove 22e4b35787a535b674abcaf5f67cc255f14d94ce6ed4bb9feea2579740278897 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_kapitsa, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:40:08 np0005544708 systemd[1]: libpod-conmon-22e4b35787a535b674abcaf5f67cc255f14d94ce6ed4bb9feea2579740278897.scope: Deactivated successfully.
Dec  3 16:40:08 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1054: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:08 np0005544708 podman[259986]: 2025-12-03 21:40:08.674763408 +0000 UTC m=+0.074474311 container create aa42f17ec442a7a2017b9347bd00c28aedfbac78c2661eb1dce7f47d6d403316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_haslett, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:40:08 np0005544708 systemd[1]: Started libpod-conmon-aa42f17ec442a7a2017b9347bd00c28aedfbac78c2661eb1dce7f47d6d403316.scope.
Dec  3 16:40:08 np0005544708 podman[259986]: 2025-12-03 21:40:08.642908202 +0000 UTC m=+0.042619115 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:40:08 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:40:08 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a17fd44a83bae8b84c089e3b78844d4b599816cb66ab8c64f60643483c710d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:40:08 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a17fd44a83bae8b84c089e3b78844d4b599816cb66ab8c64f60643483c710d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:40:08 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a17fd44a83bae8b84c089e3b78844d4b599816cb66ab8c64f60643483c710d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:40:08 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a17fd44a83bae8b84c089e3b78844d4b599816cb66ab8c64f60643483c710d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:40:08 np0005544708 podman[259986]: 2025-12-03 21:40:08.777692581 +0000 UTC m=+0.177403454 container init aa42f17ec442a7a2017b9347bd00c28aedfbac78c2661eb1dce7f47d6d403316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:40:08 np0005544708 podman[259986]: 2025-12-03 21:40:08.792166049 +0000 UTC m=+0.191876952 container start aa42f17ec442a7a2017b9347bd00c28aedfbac78c2661eb1dce7f47d6d403316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:40:08 np0005544708 podman[259986]: 2025-12-03 21:40:08.796442164 +0000 UTC m=+0.196153047 container attach aa42f17ec442a7a2017b9347bd00c28aedfbac78c2661eb1dce7f47d6d403316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_haslett, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 16:40:09 np0005544708 lvm[260083]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:40:09 np0005544708 lvm[260084]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:40:09 np0005544708 lvm[260080]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:40:09 np0005544708 lvm[260080]: VG ceph_vg0 finished
Dec  3 16:40:09 np0005544708 lvm[260083]: VG ceph_vg1 finished
Dec  3 16:40:09 np0005544708 lvm[260084]: VG ceph_vg2 finished
Dec  3 16:40:09 np0005544708 pedantic_haslett[260003]: {}
Dec  3 16:40:09 np0005544708 systemd[1]: libpod-aa42f17ec442a7a2017b9347bd00c28aedfbac78c2661eb1dce7f47d6d403316.scope: Deactivated successfully.
Dec  3 16:40:09 np0005544708 systemd[1]: libpod-aa42f17ec442a7a2017b9347bd00c28aedfbac78c2661eb1dce7f47d6d403316.scope: Consumed 1.496s CPU time.
Dec  3 16:40:09 np0005544708 podman[260087]: 2025-12-03 21:40:09.729326442 +0000 UTC m=+0.032349010 container died aa42f17ec442a7a2017b9347bd00c28aedfbac78c2661eb1dce7f47d6d403316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_haslett, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 16:40:09 np0005544708 systemd[1]: var-lib-containers-storage-overlay-7a17fd44a83bae8b84c089e3b78844d4b599816cb66ab8c64f60643483c710d6-merged.mount: Deactivated successfully.
Dec  3 16:40:09 np0005544708 podman[260087]: 2025-12-03 21:40:09.777275129 +0000 UTC m=+0.080297657 container remove aa42f17ec442a7a2017b9347bd00c28aedfbac78c2661eb1dce7f47d6d403316 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  3 16:40:09 np0005544708 systemd[1]: libpod-conmon-aa42f17ec442a7a2017b9347bd00c28aedfbac78c2661eb1dce7f47d6d403316.scope: Deactivated successfully.
Dec  3 16:40:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:40:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:40:09 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:40:09 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:40:10 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:40:10 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:40:10 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1055: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:40:12 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1056: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:14 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1057: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:16 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1058: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:40:18 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1059: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:20 np0005544708 podman[260128]: 2025-12-03 21:40:20.172694608 +0000 UTC m=+0.098298200 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 16:40:20 np0005544708 podman[260127]: 2025-12-03 21:40:20.184986758 +0000 UTC m=+0.110573439 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  3 16:40:20 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1060: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:40:21
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['vms', 'volumes', 'backups', 'cephfs.cephfs.meta', 'images', '.mgr', 'cephfs.cephfs.data']
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:40:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:40:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:40:22 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1061: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:24 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1062: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:24 np0005544708 nova_compute[241566]: 2025-12-03 21:40:24.565 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:40:25 np0005544708 nova_compute[241566]: 2025-12-03 21:40:25.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:40:26 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1063: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:26 np0005544708 nova_compute[241566]: 2025-12-03 21:40:26.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:40:26 np0005544708 nova_compute[241566]: 2025-12-03 21:40:26.552 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 16:40:26 np0005544708 nova_compute[241566]: 2025-12-03 21:40:26.552 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 16:40:26 np0005544708 nova_compute[241566]: 2025-12-03 21:40:26.572 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 16:40:26 np0005544708 nova_compute[241566]: 2025-12-03 21:40:26.572 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:40:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:40:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:40:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:40:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:40:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:40:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.533351853729544e-07 of space, bias 1.0, pg target 0.00010600055561188632 quantized to 32 (current 32)
Dec  3 16:40:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:40:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.833582600959152e-08 of space, bias 1.0, pg target 2.9500747802877454e-05 quantized to 32 (current 32)
Dec  3 16:40:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:40:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:40:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:40:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006678894471709225 of space, bias 1.0, pg target 0.20036683415127676 quantized to 32 (current 32)
Dec  3 16:40:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:40:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.953496000112683e-07 of space, bias 4.0, pg target 0.0009544195200135219 quantized to 16 (current 16)
Dec  3 16:40:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:40:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:40:28 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1064: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:28 np0005544708 nova_compute[241566]: 2025-12-03 21:40:28.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:40:28 np0005544708 nova_compute[241566]: 2025-12-03 21:40:28.552 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:40:30 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1065: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:30 np0005544708 nova_compute[241566]: 2025-12-03 21:40:30.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:40:30 np0005544708 nova_compute[241566]: 2025-12-03 21:40:30.552 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:40:30 np0005544708 nova_compute[241566]: 2025-12-03 21:40:30.552 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 16:40:30 np0005544708 nova_compute[241566]: 2025-12-03 21:40:30.553 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:40:30 np0005544708 nova_compute[241566]: 2025-12-03 21:40:30.584 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:40:30 np0005544708 nova_compute[241566]: 2025-12-03 21:40:30.584 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:40:30 np0005544708 nova_compute[241566]: 2025-12-03 21:40:30.585 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:40:30 np0005544708 nova_compute[241566]: 2025-12-03 21:40:30.585 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 16:40:30 np0005544708 nova_compute[241566]: 2025-12-03 21:40:30.586 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:40:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:40:31 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4079128885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:40:31 np0005544708 nova_compute[241566]: 2025-12-03 21:40:31.122 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:40:31 np0005544708 nova_compute[241566]: 2025-12-03 21:40:31.343 241570 WARNING nova.virt.libvirt.driver [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 16:40:31 np0005544708 nova_compute[241566]: 2025-12-03 21:40:31.344 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5114MB free_disk=59.988260054029524GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 16:40:31 np0005544708 nova_compute[241566]: 2025-12-03 21:40:31.344 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:40:31 np0005544708 nova_compute[241566]: 2025-12-03 21:40:31.344 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:40:31 np0005544708 nova_compute[241566]: 2025-12-03 21:40:31.421 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 16:40:31 np0005544708 nova_compute[241566]: 2025-12-03 21:40:31.421 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 16:40:31 np0005544708 nova_compute[241566]: 2025-12-03 21:40:31.451 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:40:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:40:32 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:40:32 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2670689828' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:40:32 np0005544708 nova_compute[241566]: 2025-12-03 21:40:32.024 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:40:32 np0005544708 nova_compute[241566]: 2025-12-03 21:40:32.030 241570 DEBUG nova.compute.provider_tree [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed in ProviderTree for provider: 94aba67c-5c5e-45d0-83d1-33eb467c8775 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 16:40:32 np0005544708 nova_compute[241566]: 2025-12-03 21:40:32.054 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 16:40:32 np0005544708 nova_compute[241566]: 2025-12-03 21:40:32.057 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 16:40:32 np0005544708 nova_compute[241566]: 2025-12-03 21:40:32.058 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:40:32 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1066: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:34 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1067: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:36 np0005544708 podman[260210]: 2025-12-03 21:40:36.207342865 +0000 UTC m=+0.144040229 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 16:40:36 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1068: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:40:38 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1069: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:40 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1070: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:40:42 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1071: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:44 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1072: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:46 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1073: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:40:48 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1074: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:40:48.945 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:40:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:40:48.946 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:40:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:40:48.946 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:40:50 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1075: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:51 np0005544708 podman[260237]: 2025-12-03 21:40:51.159918178 +0000 UTC m=+0.084946131 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:40:51 np0005544708 podman[260236]: 2025-12-03 21:40:51.163145705 +0000 UTC m=+0.091330253 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  3 16:40:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:40:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:40:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:40:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:40:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:40:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:40:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:40:52 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1076: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:54 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1077: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:56 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1078: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:40:58 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1079: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:40:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  3 16:40:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/222267683' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  3 16:40:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  3 16:40:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/222267683' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  3 16:41:00 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1080: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:41:02 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1081: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:04 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1082: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:06 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1083: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:41:07 np0005544708 podman[260276]: 2025-12-03 21:41:07.185507285 +0000 UTC m=+0.121284167 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  3 16:41:08 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1084: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:10 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1085: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:41:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:41:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:41:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:41:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:41:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:41:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:41:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:41:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:41:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:41:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:41:10 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:41:11 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:41:11 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:41:11 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:41:11 np0005544708 podman[260445]: 2025-12-03 21:41:11.40224756 +0000 UTC m=+0.057817482 container create 9c668b31aa3ca8d5d216ef6e9ea4faac47eb318b7112416b8262ae4f791a930f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:41:11 np0005544708 systemd[1]: Started libpod-conmon-9c668b31aa3ca8d5d216ef6e9ea4faac47eb318b7112416b8262ae4f791a930f.scope.
Dec  3 16:41:11 np0005544708 podman[260445]: 2025-12-03 21:41:11.374908396 +0000 UTC m=+0.030478388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:41:11 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:41:11 np0005544708 podman[260445]: 2025-12-03 21:41:11.505131373 +0000 UTC m=+0.160701335 container init 9c668b31aa3ca8d5d216ef6e9ea4faac47eb318b7112416b8262ae4f791a930f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_goldstine, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec  3 16:41:11 np0005544708 podman[260445]: 2025-12-03 21:41:11.517840404 +0000 UTC m=+0.173410326 container start 9c668b31aa3ca8d5d216ef6e9ea4faac47eb318b7112416b8262ae4f791a930f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_goldstine, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:41:11 np0005544708 podman[260445]: 2025-12-03 21:41:11.521672897 +0000 UTC m=+0.177242829 container attach 9c668b31aa3ca8d5d216ef6e9ea4faac47eb318b7112416b8262ae4f791a930f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 16:41:11 np0005544708 naughty_goldstine[260461]: 167 167
Dec  3 16:41:11 np0005544708 systemd[1]: libpod-9c668b31aa3ca8d5d216ef6e9ea4faac47eb318b7112416b8262ae4f791a930f.scope: Deactivated successfully.
Dec  3 16:41:11 np0005544708 podman[260445]: 2025-12-03 21:41:11.529042624 +0000 UTC m=+0.184612556 container died 9c668b31aa3ca8d5d216ef6e9ea4faac47eb318b7112416b8262ae4f791a930f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_goldstine, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:41:11 np0005544708 systemd[1]: var-lib-containers-storage-overlay-6678d0ab2994b4fd6426d3be979da4602aad0471c3ab23489cda282537a75fdb-merged.mount: Deactivated successfully.
Dec  3 16:41:11 np0005544708 podman[260445]: 2025-12-03 21:41:11.586067326 +0000 UTC m=+0.241637258 container remove 9c668b31aa3ca8d5d216ef6e9ea4faac47eb318b7112416b8262ae4f791a930f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  3 16:41:11 np0005544708 systemd[1]: libpod-conmon-9c668b31aa3ca8d5d216ef6e9ea4faac47eb318b7112416b8262ae4f791a930f.scope: Deactivated successfully.
Dec  3 16:41:11 np0005544708 podman[260484]: 2025-12-03 21:41:11.806796503 +0000 UTC m=+0.059758676 container create ac2d32ffef965c3d707319e5066597ae1634dd4933eb69e657e29624a5e23ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_diffie, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:41:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:41:11 np0005544708 systemd[1]: Started libpod-conmon-ac2d32ffef965c3d707319e5066597ae1634dd4933eb69e657e29624a5e23ff4.scope.
Dec  3 16:41:11 np0005544708 podman[260484]: 2025-12-03 21:41:11.784638138 +0000 UTC m=+0.037600301 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:41:11 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:41:11 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d14f91001be773318ca5b8083375a223d19971159ee726e68cad2de8a309e8fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:41:11 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d14f91001be773318ca5b8083375a223d19971159ee726e68cad2de8a309e8fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:41:11 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d14f91001be773318ca5b8083375a223d19971159ee726e68cad2de8a309e8fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:41:11 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d14f91001be773318ca5b8083375a223d19971159ee726e68cad2de8a309e8fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:41:11 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d14f91001be773318ca5b8083375a223d19971159ee726e68cad2de8a309e8fa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:41:11 np0005544708 podman[260484]: 2025-12-03 21:41:11.926396673 +0000 UTC m=+0.179358856 container init ac2d32ffef965c3d707319e5066597ae1634dd4933eb69e657e29624a5e23ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec  3 16:41:11 np0005544708 podman[260484]: 2025-12-03 21:41:11.940159903 +0000 UTC m=+0.193122086 container start ac2d32ffef965c3d707319e5066597ae1634dd4933eb69e657e29624a5e23ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  3 16:41:11 np0005544708 podman[260484]: 2025-12-03 21:41:11.94488533 +0000 UTC m=+0.197847503 container attach ac2d32ffef965c3d707319e5066597ae1634dd4933eb69e657e29624a5e23ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_diffie, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:41:12 np0005544708 nervous_diffie[260502]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:41:12 np0005544708 nervous_diffie[260502]: --> All data devices are unavailable
Dec  3 16:41:12 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1086: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:12 np0005544708 systemd[1]: libpod-ac2d32ffef965c3d707319e5066597ae1634dd4933eb69e657e29624a5e23ff4.scope: Deactivated successfully.
Dec  3 16:41:12 np0005544708 podman[260484]: 2025-12-03 21:41:12.571421772 +0000 UTC m=+0.824383955 container died ac2d32ffef965c3d707319e5066597ae1634dd4933eb69e657e29624a5e23ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_diffie, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:41:12 np0005544708 systemd[1]: var-lib-containers-storage-overlay-d14f91001be773318ca5b8083375a223d19971159ee726e68cad2de8a309e8fa-merged.mount: Deactivated successfully.
Dec  3 16:41:12 np0005544708 podman[260484]: 2025-12-03 21:41:12.628798302 +0000 UTC m=+0.881760485 container remove ac2d32ffef965c3d707319e5066597ae1634dd4933eb69e657e29624a5e23ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_diffie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 16:41:12 np0005544708 systemd[1]: libpod-conmon-ac2d32ffef965c3d707319e5066597ae1634dd4933eb69e657e29624a5e23ff4.scope: Deactivated successfully.
Dec  3 16:41:13 np0005544708 podman[260594]: 2025-12-03 21:41:13.174805652 +0000 UTC m=+0.064118522 container create f7028cc1ef14a9bdf3977e6facfae2a1a54a92cdf3e72058776ec7d1a859485f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec  3 16:41:13 np0005544708 systemd[1]: Started libpod-conmon-f7028cc1ef14a9bdf3977e6facfae2a1a54a92cdf3e72058776ec7d1a859485f.scope.
Dec  3 16:41:13 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:41:13 np0005544708 podman[260594]: 2025-12-03 21:41:13.156417968 +0000 UTC m=+0.045730838 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:41:13 np0005544708 podman[260594]: 2025-12-03 21:41:13.267290085 +0000 UTC m=+0.156602965 container init f7028cc1ef14a9bdf3977e6facfae2a1a54a92cdf3e72058776ec7d1a859485f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_euclid, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:41:13 np0005544708 podman[260594]: 2025-12-03 21:41:13.278832855 +0000 UTC m=+0.168145735 container start f7028cc1ef14a9bdf3977e6facfae2a1a54a92cdf3e72058776ec7d1a859485f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec  3 16:41:13 np0005544708 podman[260594]: 2025-12-03 21:41:13.282685128 +0000 UTC m=+0.171998078 container attach f7028cc1ef14a9bdf3977e6facfae2a1a54a92cdf3e72058776ec7d1a859485f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:41:13 np0005544708 kind_euclid[260610]: 167 167
Dec  3 16:41:13 np0005544708 podman[260594]: 2025-12-03 21:41:13.286204113 +0000 UTC m=+0.175516993 container died f7028cc1ef14a9bdf3977e6facfae2a1a54a92cdf3e72058776ec7d1a859485f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_euclid, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 16:41:13 np0005544708 systemd[1]: libpod-f7028cc1ef14a9bdf3977e6facfae2a1a54a92cdf3e72058776ec7d1a859485f.scope: Deactivated successfully.
Dec  3 16:41:13 np0005544708 systemd[1]: var-lib-containers-storage-overlay-3f4ee4a47f09b3c5a564807046da8cfb1dc2c100c727072d6f0446a3a56f5793-merged.mount: Deactivated successfully.
Dec  3 16:41:13 np0005544708 podman[260594]: 2025-12-03 21:41:13.328310904 +0000 UTC m=+0.217623804 container remove f7028cc1ef14a9bdf3977e6facfae2a1a54a92cdf3e72058776ec7d1a859485f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_euclid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  3 16:41:13 np0005544708 systemd[1]: libpod-conmon-f7028cc1ef14a9bdf3977e6facfae2a1a54a92cdf3e72058776ec7d1a859485f.scope: Deactivated successfully.
Dec  3 16:41:13 np0005544708 podman[260636]: 2025-12-03 21:41:13.529466204 +0000 UTC m=+0.041114564 container create 0b55453663435efd9fc94e1626184dd3359b304f7154f55eb28ff0e37ad4aac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Dec  3 16:41:13 np0005544708 systemd[1]: Started libpod-conmon-0b55453663435efd9fc94e1626184dd3359b304f7154f55eb28ff0e37ad4aac1.scope.
Dec  3 16:41:13 np0005544708 podman[260636]: 2025-12-03 21:41:13.511646206 +0000 UTC m=+0.023294556 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:41:13 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:41:13 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c577a8a24eb18401172afd33f03f1e1545a83997215e5e74c176c97fda66159/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:41:13 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c577a8a24eb18401172afd33f03f1e1545a83997215e5e74c176c97fda66159/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:41:13 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c577a8a24eb18401172afd33f03f1e1545a83997215e5e74c176c97fda66159/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:41:13 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c577a8a24eb18401172afd33f03f1e1545a83997215e5e74c176c97fda66159/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:41:13 np0005544708 podman[260636]: 2025-12-03 21:41:13.650065463 +0000 UTC m=+0.161713813 container init 0b55453663435efd9fc94e1626184dd3359b304f7154f55eb28ff0e37ad4aac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Dec  3 16:41:13 np0005544708 podman[260636]: 2025-12-03 21:41:13.663536714 +0000 UTC m=+0.175185044 container start 0b55453663435efd9fc94e1626184dd3359b304f7154f55eb28ff0e37ad4aac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_fermat, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 16:41:13 np0005544708 podman[260636]: 2025-12-03 21:41:13.666530664 +0000 UTC m=+0.178179034 container attach 0b55453663435efd9fc94e1626184dd3359b304f7154f55eb28ff0e37ad4aac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_fermat, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]: {
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:    "0": [
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:        {
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "devices": [
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "/dev/loop3"
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            ],
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "lv_name": "ceph_lv0",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "lv_size": "21470642176",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "name": "ceph_lv0",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "tags": {
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.cluster_name": "ceph",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.crush_device_class": "",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.encrypted": "0",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.objectstore": "bluestore",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.osd_id": "0",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.type": "block",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.vdo": "0",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.with_tpm": "0"
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            },
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "type": "block",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "vg_name": "ceph_vg0"
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:        }
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:    ],
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:    "1": [
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:        {
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "devices": [
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "/dev/loop4"
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            ],
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "lv_name": "ceph_lv1",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "lv_size": "21470642176",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "name": "ceph_lv1",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "tags": {
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.cluster_name": "ceph",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.crush_device_class": "",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.encrypted": "0",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.objectstore": "bluestore",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.osd_id": "1",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.type": "block",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.vdo": "0",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.with_tpm": "0"
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            },
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "type": "block",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "vg_name": "ceph_vg1"
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:        }
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:    ],
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:    "2": [
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:        {
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "devices": [
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "/dev/loop5"
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            ],
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "lv_name": "ceph_lv2",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "lv_size": "21470642176",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "name": "ceph_lv2",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "tags": {
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.cluster_name": "ceph",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.crush_device_class": "",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.encrypted": "0",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.objectstore": "bluestore",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.osd_id": "2",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.type": "block",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.vdo": "0",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:                "ceph.with_tpm": "0"
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            },
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "type": "block",
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:            "vg_name": "ceph_vg2"
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:        }
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]:    ]
Dec  3 16:41:13 np0005544708 stupefied_fermat[260652]: }
Dec  3 16:41:13 np0005544708 systemd[1]: libpod-0b55453663435efd9fc94e1626184dd3359b304f7154f55eb28ff0e37ad4aac1.scope: Deactivated successfully.
Dec  3 16:41:14 np0005544708 podman[260636]: 2025-12-03 21:41:14.000652256 +0000 UTC m=+0.512300656 container died 0b55453663435efd9fc94e1626184dd3359b304f7154f55eb28ff0e37ad4aac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_fermat, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec  3 16:41:14 np0005544708 systemd[1]: var-lib-containers-storage-overlay-0c577a8a24eb18401172afd33f03f1e1545a83997215e5e74c176c97fda66159-merged.mount: Deactivated successfully.
Dec  3 16:41:14 np0005544708 podman[260636]: 2025-12-03 21:41:14.05146333 +0000 UTC m=+0.563111660 container remove 0b55453663435efd9fc94e1626184dd3359b304f7154f55eb28ff0e37ad4aac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  3 16:41:14 np0005544708 systemd[1]: libpod-conmon-0b55453663435efd9fc94e1626184dd3359b304f7154f55eb28ff0e37ad4aac1.scope: Deactivated successfully.
Dec  3 16:41:14 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1087: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:14 np0005544708 podman[260734]: 2025-12-03 21:41:14.705162811 +0000 UTC m=+0.067566085 container create 11f0139ece34d6c1a710545ffddcb249b216d95583f6a9938ad20422f9b0ef27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec  3 16:41:14 np0005544708 systemd[1]: Started libpod-conmon-11f0139ece34d6c1a710545ffddcb249b216d95583f6a9938ad20422f9b0ef27.scope.
Dec  3 16:41:14 np0005544708 podman[260734]: 2025-12-03 21:41:14.676549683 +0000 UTC m=+0.038953007 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:41:14 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:41:14 np0005544708 podman[260734]: 2025-12-03 21:41:14.799551145 +0000 UTC m=+0.161954479 container init 11f0139ece34d6c1a710545ffddcb249b216d95583f6a9938ad20422f9b0ef27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec  3 16:41:14 np0005544708 podman[260734]: 2025-12-03 21:41:14.806654345 +0000 UTC m=+0.169057599 container start 11f0139ece34d6c1a710545ffddcb249b216d95583f6a9938ad20422f9b0ef27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:41:14 np0005544708 podman[260734]: 2025-12-03 21:41:14.809809731 +0000 UTC m=+0.172222115 container attach 11f0139ece34d6c1a710545ffddcb249b216d95583f6a9938ad20422f9b0ef27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:41:14 np0005544708 angry_beaver[260752]: 167 167
Dec  3 16:41:14 np0005544708 systemd[1]: libpod-11f0139ece34d6c1a710545ffddcb249b216d95583f6a9938ad20422f9b0ef27.scope: Deactivated successfully.
Dec  3 16:41:14 np0005544708 podman[260734]: 2025-12-03 21:41:14.814007303 +0000 UTC m=+0.176410587 container died 11f0139ece34d6c1a710545ffddcb249b216d95583f6a9938ad20422f9b0ef27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:41:14 np0005544708 systemd[1]: var-lib-containers-storage-overlay-2b1b3550cb6978738dd58861858b033f909702fabd0c508b241a80658426b615-merged.mount: Deactivated successfully.
Dec  3 16:41:14 np0005544708 podman[260734]: 2025-12-03 21:41:14.868435114 +0000 UTC m=+0.230838368 container remove 11f0139ece34d6c1a710545ffddcb249b216d95583f6a9938ad20422f9b0ef27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_beaver, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec  3 16:41:14 np0005544708 systemd[1]: libpod-conmon-11f0139ece34d6c1a710545ffddcb249b216d95583f6a9938ad20422f9b0ef27.scope: Deactivated successfully.
Dec  3 16:41:15 np0005544708 podman[260775]: 2025-12-03 21:41:15.136005469 +0000 UTC m=+0.069347884 container create 867e880f3cc2c8f5e1e597e9859eeb5e8dfa80f75b62a98ef9135e004c80f23f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mclean, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:41:15 np0005544708 systemd[1]: Started libpod-conmon-867e880f3cc2c8f5e1e597e9859eeb5e8dfa80f75b62a98ef9135e004c80f23f.scope.
Dec  3 16:41:15 np0005544708 podman[260775]: 2025-12-03 21:41:15.107345449 +0000 UTC m=+0.040687914 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:41:15 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:41:15 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e3732a8b3ba9becb6e7b6ef80068bfb9268f343510c2b11d3336355abf44142/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:41:15 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e3732a8b3ba9becb6e7b6ef80068bfb9268f343510c2b11d3336355abf44142/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:41:15 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e3732a8b3ba9becb6e7b6ef80068bfb9268f343510c2b11d3336355abf44142/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:41:15 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e3732a8b3ba9becb6e7b6ef80068bfb9268f343510c2b11d3336355abf44142/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:41:15 np0005544708 podman[260775]: 2025-12-03 21:41:15.259052842 +0000 UTC m=+0.192395267 container init 867e880f3cc2c8f5e1e597e9859eeb5e8dfa80f75b62a98ef9135e004c80f23f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 16:41:15 np0005544708 podman[260775]: 2025-12-03 21:41:15.271952818 +0000 UTC m=+0.205295223 container start 867e880f3cc2c8f5e1e597e9859eeb5e8dfa80f75b62a98ef9135e004c80f23f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:41:15 np0005544708 podman[260775]: 2025-12-03 21:41:15.276539772 +0000 UTC m=+0.209882237 container attach 867e880f3cc2c8f5e1e597e9859eeb5e8dfa80f75b62a98ef9135e004c80f23f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mclean, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 16:41:16 np0005544708 lvm[260868]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:41:16 np0005544708 lvm[260872]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:41:16 np0005544708 lvm[260868]: VG ceph_vg0 finished
Dec  3 16:41:16 np0005544708 lvm[260872]: VG ceph_vg2 finished
Dec  3 16:41:16 np0005544708 lvm[260871]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:41:16 np0005544708 lvm[260871]: VG ceph_vg1 finished
Dec  3 16:41:16 np0005544708 angry_mclean[260791]: {}
Dec  3 16:41:16 np0005544708 systemd[1]: libpod-867e880f3cc2c8f5e1e597e9859eeb5e8dfa80f75b62a98ef9135e004c80f23f.scope: Deactivated successfully.
Dec  3 16:41:16 np0005544708 podman[260775]: 2025-12-03 21:41:16.178011946 +0000 UTC m=+1.111354351 container died 867e880f3cc2c8f5e1e597e9859eeb5e8dfa80f75b62a98ef9135e004c80f23f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Dec  3 16:41:16 np0005544708 systemd[1]: libpod-867e880f3cc2c8f5e1e597e9859eeb5e8dfa80f75b62a98ef9135e004c80f23f.scope: Consumed 1.499s CPU time.
Dec  3 16:41:16 np0005544708 systemd[1]: var-lib-containers-storage-overlay-2e3732a8b3ba9becb6e7b6ef80068bfb9268f343510c2b11d3336355abf44142-merged.mount: Deactivated successfully.
Dec  3 16:41:16 np0005544708 podman[260775]: 2025-12-03 21:41:16.51338213 +0000 UTC m=+1.446724535 container remove 867e880f3cc2c8f5e1e597e9859eeb5e8dfa80f75b62a98ef9135e004c80f23f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_mclean, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:41:16 np0005544708 systemd[1]: libpod-conmon-867e880f3cc2c8f5e1e597e9859eeb5e8dfa80f75b62a98ef9135e004c80f23f.scope: Deactivated successfully.
Dec  3 16:41:16 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1088: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:41:16 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:41:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:41:16 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:41:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:41:17 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:41:17 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:41:18 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1089: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:20 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1090: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:41:21
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'backups', 'volumes', 'images', 'cephfs.cephfs.data', 'vms']
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:41:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:41:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:41:22 np0005544708 podman[260915]: 2025-12-03 21:41:22.191279636 +0000 UTC m=+0.105799941 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  3 16:41:22 np0005544708 podman[260916]: 2025-12-03 21:41:22.20591732 +0000 UTC m=+0.120754623 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 16:41:22 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1091: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:24 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1092: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:26 np0005544708 nova_compute[241566]: 2025-12-03 21:41:26.053 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:41:26 np0005544708 nova_compute[241566]: 2025-12-03 21:41:26.054 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:41:26 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1093: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:26 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:41:27 np0005544708 nova_compute[241566]: 2025-12-03 21:41:27.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:41:27 np0005544708 nova_compute[241566]: 2025-12-03 21:41:27.551 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 16:41:27 np0005544708 nova_compute[241566]: 2025-12-03 21:41:27.551 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 16:41:27 np0005544708 nova_compute[241566]: 2025-12-03 21:41:27.566 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 16:41:27 np0005544708 nova_compute[241566]: 2025-12-03 21:41:27.567 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:41:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:41:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:41:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:41:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:41:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.533351853729544e-07 of space, bias 1.0, pg target 0.00010600055561188632 quantized to 32 (current 32)
Dec  3 16:41:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:41:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.833582600959152e-08 of space, bias 1.0, pg target 2.9500747802877454e-05 quantized to 32 (current 32)
Dec  3 16:41:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:41:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:41:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:41:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006678894471709225 of space, bias 1.0, pg target 0.20036683415127676 quantized to 32 (current 32)
Dec  3 16:41:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:41:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.953496000112683e-07 of space, bias 4.0, pg target 0.0009544195200135219 quantized to 16 (current 16)
Dec  3 16:41:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:41:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:41:28 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1094: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:28 np0005544708 nova_compute[241566]: 2025-12-03 21:41:28.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:41:28 np0005544708 nova_compute[241566]: 2025-12-03 21:41:28.552 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:41:30 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1095: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:30 np0005544708 nova_compute[241566]: 2025-12-03 21:41:30.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:41:31 np0005544708 nova_compute[241566]: 2025-12-03 21:41:31.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:41:31 np0005544708 nova_compute[241566]: 2025-12-03 21:41:31.551 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 16:41:31 np0005544708 nova_compute[241566]: 2025-12-03 21:41:31.552 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:41:31 np0005544708 nova_compute[241566]: 2025-12-03 21:41:31.585 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:41:31 np0005544708 nova_compute[241566]: 2025-12-03 21:41:31.586 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:41:31 np0005544708 nova_compute[241566]: 2025-12-03 21:41:31.586 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:41:31 np0005544708 nova_compute[241566]: 2025-12-03 21:41:31.586 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 16:41:31 np0005544708 nova_compute[241566]: 2025-12-03 21:41:31.587 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:41:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:41:32 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:41:32 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2014570269' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:41:32 np0005544708 nova_compute[241566]: 2025-12-03 21:41:32.176 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:41:32 np0005544708 nova_compute[241566]: 2025-12-03 21:41:32.372 241570 WARNING nova.virt.libvirt.driver [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 16:41:32 np0005544708 nova_compute[241566]: 2025-12-03 21:41:32.374 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5107MB free_disk=59.988260054029524GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 16:41:32 np0005544708 nova_compute[241566]: 2025-12-03 21:41:32.374 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:41:32 np0005544708 nova_compute[241566]: 2025-12-03 21:41:32.375 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:41:32 np0005544708 nova_compute[241566]: 2025-12-03 21:41:32.465 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 16:41:32 np0005544708 nova_compute[241566]: 2025-12-03 21:41:32.466 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 16:41:32 np0005544708 nova_compute[241566]: 2025-12-03 21:41:32.493 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:41:32 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1096: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:33 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:41:33 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2601065406' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:41:33 np0005544708 nova_compute[241566]: 2025-12-03 21:41:33.059 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:41:33 np0005544708 nova_compute[241566]: 2025-12-03 21:41:33.066 241570 DEBUG nova.compute.provider_tree [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed in ProviderTree for provider: 94aba67c-5c5e-45d0-83d1-33eb467c8775 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 16:41:33 np0005544708 nova_compute[241566]: 2025-12-03 21:41:33.091 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 16:41:33 np0005544708 nova_compute[241566]: 2025-12-03 21:41:33.094 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 16:41:33 np0005544708 nova_compute[241566]: 2025-12-03 21:41:33.094 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:41:34 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1097: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:36 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1098: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:41:37 np0005544708 nova_compute[241566]: 2025-12-03 21:41:37.089 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:41:38 np0005544708 podman[260997]: 2025-12-03 21:41:38.203776708 +0000 UTC m=+0.136524336 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  3 16:41:38 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1099: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:40 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1100: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:41:42 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1101: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:44 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1102: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:46 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1103: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:41:48 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1104: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:41:48.949 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:41:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:41:48.949 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:41:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:41:48.950 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:41:50.388358) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764798110388399, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2049, "num_deletes": 251, "total_data_size": 2398933, "memory_usage": 2443008, "flush_reason": "Manual Compaction"}
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Dec  3 16:41:50 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1105: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764798110575840, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 2315437, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20619, "largest_seqno": 22667, "table_properties": {"data_size": 2306200, "index_size": 5795, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18454, "raw_average_key_size": 19, "raw_value_size": 2287738, "raw_average_value_size": 2470, "num_data_blocks": 266, "num_entries": 926, "num_filter_entries": 926, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764797881, "oldest_key_time": 1764797881, "file_creation_time": 1764798110, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 188143 microseconds, and 10166 cpu microseconds.
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:41:50.576495) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 2315437 bytes OK
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:41:50.576523) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:41:50.718081) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:41:50.718131) EVENT_LOG_v1 {"time_micros": 1764798110718121, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:41:50.718160) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2390363, prev total WAL file size 2390363, number of live WAL files 2.
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:41:50.719867) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(2261KB)], [50(5707KB)]
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764798110719933, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 8159606, "oldest_snapshot_seqno": -1}
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4450 keys, 6932108 bytes, temperature: kUnknown
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764798110844688, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 6932108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6898918, "index_size": 20984, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 106555, "raw_average_key_size": 23, "raw_value_size": 6815629, "raw_average_value_size": 1531, "num_data_blocks": 893, "num_entries": 4450, "num_filter_entries": 4450, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764796079, "oldest_key_time": 0, "file_creation_time": 1764798110, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d83d641b-0db7-44b5-9540-349f4c36f664", "db_session_id": "YRQHTOJ9E78VAMDNI6U1", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:41:50.844988) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 6932108 bytes
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:41:50.904089) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.4 rd, 55.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 5.6 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 4964, records dropped: 514 output_compression: NoCompression
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:41:50.904123) EVENT_LOG_v1 {"time_micros": 1764798110904109, "job": 26, "event": "compaction_finished", "compaction_time_micros": 124840, "compaction_time_cpu_micros": 31827, "output_level": 6, "num_output_files": 1, "total_output_size": 6932108, "num_input_records": 4964, "num_output_records": 4450, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764798110905031, "job": 26, "event": "table_file_deletion", "file_number": 52}
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764798110907071, "job": 26, "event": "table_file_deletion", "file_number": 50}
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:41:50.719755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:41:50.907149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:41:50.907157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:41:50.907160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:41:50.907163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:41:50 np0005544708 ceph-mon[75204]: rocksdb: (Original Log Time 2025/12/03-21:41:50.907166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  3 16:41:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:41:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:41:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:41:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:41:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:41:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:41:51 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:41:52 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1106: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:53 np0005544708 podman[261025]: 2025-12-03 21:41:53.163167335 +0000 UTC m=+0.087064778 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  3 16:41:53 np0005544708 podman[261024]: 2025-12-03 21:41:53.170922734 +0000 UTC m=+0.100468879 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec  3 16:41:54 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1107: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:56 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1108: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:56 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:41:58 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1109: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:41:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  3 16:41:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1028768076' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  3 16:41:59 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  3 16:41:59 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1028768076' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  3 16:42:00 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1110: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:01 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:42:02 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1111: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:04 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1112: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:06 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1113: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:42:08 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1114: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:09 np0005544708 podman[261064]: 2025-12-03 21:42:09.246278646 +0000 UTC m=+0.177620310 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  3 16:42:10 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1115: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:11 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:42:12 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1116: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:13 np0005544708 nova_compute[241566]: 2025-12-03 21:42:13.739 241570 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 7.71 sec#033[00m
Dec  3 16:42:14 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1117: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:16 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1118: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:16 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:42:17 np0005544708 podman[261187]: 2025-12-03 21:42:17.459505453 +0000 UTC m=+0.101716542 container exec 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec  3 16:42:17 np0005544708 podman[261187]: 2025-12-03 21:42:17.575439476 +0000 UTC m=+0.217650555 container exec_died 5be1cf87f4450717250955c63021ef41dba8c1099adb634f3e1f6a0b130fa916 (image=quay.io/ceph/ceph:v20, name=ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mon-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec  3 16:42:18 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:42:18 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:42:18 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:42:18 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:42:18 np0005544708 nova_compute[241566]: 2025-12-03 21:42:18.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:42:18 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1119: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:19 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:42:19 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:42:19 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:42:19 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:42:19 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:42:19 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:42:19 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:42:19 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:42:19 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:42:19 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:42:19 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:42:19 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:42:19 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:42:19 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:42:19 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:42:19 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:42:19 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:42:19 np0005544708 podman[261500]: 2025-12-03 21:42:19.90041503 +0000 UTC m=+0.057376042 container create b9864daee99cad508d4098f09b004f426d81782a644711116faa4e1537476b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:42:19 np0005544708 systemd[1]: Started libpod-conmon-b9864daee99cad508d4098f09b004f426d81782a644711116faa4e1537476b12.scope.
Dec  3 16:42:19 np0005544708 podman[261500]: 2025-12-03 21:42:19.880863075 +0000 UTC m=+0.037824067 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:42:19 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:42:19 np0005544708 podman[261500]: 2025-12-03 21:42:19.998114763 +0000 UTC m=+0.155075835 container init b9864daee99cad508d4098f09b004f426d81782a644711116faa4e1537476b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hamilton, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:42:20 np0005544708 podman[261500]: 2025-12-03 21:42:20.010117785 +0000 UTC m=+0.167078777 container start b9864daee99cad508d4098f09b004f426d81782a644711116faa4e1537476b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hamilton, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:42:20 np0005544708 podman[261500]: 2025-12-03 21:42:20.014154023 +0000 UTC m=+0.171115035 container attach b9864daee99cad508d4098f09b004f426d81782a644711116faa4e1537476b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hamilton, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:42:20 np0005544708 eager_hamilton[261516]: 167 167
Dec  3 16:42:20 np0005544708 systemd[1]: libpod-b9864daee99cad508d4098f09b004f426d81782a644711116faa4e1537476b12.scope: Deactivated successfully.
Dec  3 16:42:20 np0005544708 podman[261500]: 2025-12-03 21:42:20.017369079 +0000 UTC m=+0.174330081 container died b9864daee99cad508d4098f09b004f426d81782a644711116faa4e1537476b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec  3 16:42:20 np0005544708 systemd[1]: var-lib-containers-storage-overlay-e86f355f7cf54491cbd211459ef4e5f67a455d893f877ee869572a67a511fecc-merged.mount: Deactivated successfully.
Dec  3 16:42:20 np0005544708 podman[261500]: 2025-12-03 21:42:20.06431309 +0000 UTC m=+0.221274062 container remove b9864daee99cad508d4098f09b004f426d81782a644711116faa4e1537476b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hamilton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:42:20 np0005544708 systemd[1]: libpod-conmon-b9864daee99cad508d4098f09b004f426d81782a644711116faa4e1537476b12.scope: Deactivated successfully.
Dec  3 16:42:20 np0005544708 podman[261539]: 2025-12-03 21:42:20.294728786 +0000 UTC m=+0.054887404 container create 06e5764e514414369f6a73774f58b04d368cb3a5a55d0e1cc7ac62ecf0fbd253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  3 16:42:20 np0005544708 systemd[1]: Started libpod-conmon-06e5764e514414369f6a73774f58b04d368cb3a5a55d0e1cc7ac62ecf0fbd253.scope.
Dec  3 16:42:20 np0005544708 podman[261539]: 2025-12-03 21:42:20.267506426 +0000 UTC m=+0.027665094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:42:20 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:42:20 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3292546c045abba634357304018129f3c101ebf61141dad611ab6a959a05600/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:42:20 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3292546c045abba634357304018129f3c101ebf61141dad611ab6a959a05600/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:42:20 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3292546c045abba634357304018129f3c101ebf61141dad611ab6a959a05600/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:42:20 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3292546c045abba634357304018129f3c101ebf61141dad611ab6a959a05600/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:42:20 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3292546c045abba634357304018129f3c101ebf61141dad611ab6a959a05600/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:42:20 np0005544708 podman[261539]: 2025-12-03 21:42:20.39392338 +0000 UTC m=+0.154082008 container init 06e5764e514414369f6a73774f58b04d368cb3a5a55d0e1cc7ac62ecf0fbd253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_rhodes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:42:20 np0005544708 podman[261539]: 2025-12-03 21:42:20.402963893 +0000 UTC m=+0.163122501 container start 06e5764e514414369f6a73774f58b04d368cb3a5a55d0e1cc7ac62ecf0fbd253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  3 16:42:20 np0005544708 podman[261539]: 2025-12-03 21:42:20.406530988 +0000 UTC m=+0.166689636 container attach 06e5764e514414369f6a73774f58b04d368cb3a5a55d0e1cc7ac62ecf0fbd253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec  3 16:42:20 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1120: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:20 np0005544708 cool_rhodes[261556]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:42:20 np0005544708 cool_rhodes[261556]: --> All data devices are unavailable
Dec  3 16:42:20 np0005544708 systemd[1]: libpod-06e5764e514414369f6a73774f58b04d368cb3a5a55d0e1cc7ac62ecf0fbd253.scope: Deactivated successfully.
Dec  3 16:42:20 np0005544708 podman[261539]: 2025-12-03 21:42:20.983090789 +0000 UTC m=+0.743249437 container died 06e5764e514414369f6a73774f58b04d368cb3a5a55d0e1cc7ac62ecf0fbd253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_rhodes, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  3 16:42:21 np0005544708 systemd[1]: var-lib-containers-storage-overlay-d3292546c045abba634357304018129f3c101ebf61141dad611ab6a959a05600-merged.mount: Deactivated successfully.
Dec  3 16:42:21 np0005544708 podman[261539]: 2025-12-03 21:42:21.039054511 +0000 UTC m=+0.799213129 container remove 06e5764e514414369f6a73774f58b04d368cb3a5a55d0e1cc7ac62ecf0fbd253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 16:42:21 np0005544708 systemd[1]: libpod-conmon-06e5764e514414369f6a73774f58b04d368cb3a5a55d0e1cc7ac62ecf0fbd253.scope: Deactivated successfully.
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:42:21
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'vms', 'images', 'volumes', 'cephfs.cephfs.meta', 'backups']
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:42:21 np0005544708 podman[261651]: 2025-12-03 21:42:21.582957464 +0000 UTC m=+0.057522975 container create b9fec94d17a5db13b143dfc1253dda86333fc77dd5ec2d40dfd8d3b1a1a12242 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_elion, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec  3 16:42:21 np0005544708 systemd[1]: Started libpod-conmon-b9fec94d17a5db13b143dfc1253dda86333fc77dd5ec2d40dfd8d3b1a1a12242.scope.
Dec  3 16:42:21 np0005544708 podman[261651]: 2025-12-03 21:42:21.565519466 +0000 UTC m=+0.040084977 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:42:21 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:42:21 np0005544708 podman[261651]: 2025-12-03 21:42:21.696573105 +0000 UTC m=+0.171138696 container init b9fec94d17a5db13b143dfc1253dda86333fc77dd5ec2d40dfd8d3b1a1a12242 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:42:21 np0005544708 podman[261651]: 2025-12-03 21:42:21.707718724 +0000 UTC m=+0.182284255 container start b9fec94d17a5db13b143dfc1253dda86333fc77dd5ec2d40dfd8d3b1a1a12242 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_elion, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:42:21 np0005544708 podman[261651]: 2025-12-03 21:42:21.711731392 +0000 UTC m=+0.186296983 container attach b9fec94d17a5db13b143dfc1253dda86333fc77dd5ec2d40dfd8d3b1a1a12242 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_elion, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:42:21 np0005544708 sad_elion[261667]: 167 167
Dec  3 16:42:21 np0005544708 systemd[1]: libpod-b9fec94d17a5db13b143dfc1253dda86333fc77dd5ec2d40dfd8d3b1a1a12242.scope: Deactivated successfully.
Dec  3 16:42:21 np0005544708 podman[261651]: 2025-12-03 21:42:21.717694742 +0000 UTC m=+0.192260283 container died b9fec94d17a5db13b143dfc1253dda86333fc77dd5ec2d40dfd8d3b1a1a12242 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_elion, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:42:21 np0005544708 systemd[1]: var-lib-containers-storage-overlay-a9e85a41cafbcde083a0c939a39c641acaa773e3f63dea984bb91bc748942b22-merged.mount: Deactivated successfully.
Dec  3 16:42:21 np0005544708 podman[261651]: 2025-12-03 21:42:21.772702519 +0000 UTC m=+0.247268050 container remove b9fec94d17a5db13b143dfc1253dda86333fc77dd5ec2d40dfd8d3b1a1a12242 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_elion, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:42:21 np0005544708 systemd[1]: libpod-conmon-b9fec94d17a5db13b143dfc1253dda86333fc77dd5ec2d40dfd8d3b1a1a12242.scope: Deactivated successfully.
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:42:21 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:42:21 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:42:22 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:42:22 np0005544708 podman[261692]: 2025-12-03 21:42:22.003934538 +0000 UTC m=+0.065722796 container create 0e3abade3ce6723b772d9007b5f2d525f86c4d807bca0335da6beb82ebf635bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_williams, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:42:22 np0005544708 systemd[1]: Started libpod-conmon-0e3abade3ce6723b772d9007b5f2d525f86c4d807bca0335da6beb82ebf635bf.scope.
Dec  3 16:42:22 np0005544708 podman[261692]: 2025-12-03 21:42:21.971028524 +0000 UTC m=+0.032816842 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:42:22 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:42:22 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f04551b5697548285dcd1ee17f607b96d022f7ff98d6e2d56fd70d60def0b191/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:42:22 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f04551b5697548285dcd1ee17f607b96d022f7ff98d6e2d56fd70d60def0b191/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:42:22 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f04551b5697548285dcd1ee17f607b96d022f7ff98d6e2d56fd70d60def0b191/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:42:22 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f04551b5697548285dcd1ee17f607b96d022f7ff98d6e2d56fd70d60def0b191/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:42:22 np0005544708 podman[261692]: 2025-12-03 21:42:22.103767638 +0000 UTC m=+0.165555876 container init 0e3abade3ce6723b772d9007b5f2d525f86c4d807bca0335da6beb82ebf635bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_williams, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  3 16:42:22 np0005544708 podman[261692]: 2025-12-03 21:42:22.117893877 +0000 UTC m=+0.179682135 container start 0e3abade3ce6723b772d9007b5f2d525f86c4d807bca0335da6beb82ebf635bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_williams, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:42:22 np0005544708 podman[261692]: 2025-12-03 21:42:22.121775982 +0000 UTC m=+0.183564220 container attach 0e3abade3ce6723b772d9007b5f2d525f86c4d807bca0335da6beb82ebf635bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_williams, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]: {
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:    "0": [
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:        {
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "devices": [
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "/dev/loop3"
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            ],
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "lv_name": "ceph_lv0",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "lv_size": "21470642176",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "name": "ceph_lv0",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "tags": {
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.cluster_name": "ceph",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.crush_device_class": "",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.encrypted": "0",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.objectstore": "bluestore",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.osd_id": "0",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.type": "block",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.vdo": "0",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.with_tpm": "0"
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            },
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "type": "block",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "vg_name": "ceph_vg0"
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:        }
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:    ],
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:    "1": [
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:        {
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "devices": [
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "/dev/loop4"
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            ],
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "lv_name": "ceph_lv1",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "lv_size": "21470642176",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "name": "ceph_lv1",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "tags": {
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.cluster_name": "ceph",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.crush_device_class": "",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.encrypted": "0",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.objectstore": "bluestore",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.osd_id": "1",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.type": "block",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.vdo": "0",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.with_tpm": "0"
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            },
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "type": "block",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "vg_name": "ceph_vg1"
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:        }
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:    ],
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:    "2": [
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:        {
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "devices": [
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "/dev/loop5"
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            ],
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "lv_name": "ceph_lv2",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "lv_size": "21470642176",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "name": "ceph_lv2",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "tags": {
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.cluster_name": "ceph",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.crush_device_class": "",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.encrypted": "0",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.objectstore": "bluestore",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.osd_id": "2",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.type": "block",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.vdo": "0",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:                "ceph.with_tpm": "0"
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            },
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "type": "block",
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:            "vg_name": "ceph_vg2"
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:        }
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]:    ]
Dec  3 16:42:22 np0005544708 unruffled_williams[261709]: }
Dec  3 16:42:22 np0005544708 systemd[1]: libpod-0e3abade3ce6723b772d9007b5f2d525f86c4d807bca0335da6beb82ebf635bf.scope: Deactivated successfully.
Dec  3 16:42:22 np0005544708 podman[261692]: 2025-12-03 21:42:22.449362487 +0000 UTC m=+0.511150735 container died 0e3abade3ce6723b772d9007b5f2d525f86c4d807bca0335da6beb82ebf635bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:42:22 np0005544708 systemd[1]: var-lib-containers-storage-overlay-f04551b5697548285dcd1ee17f607b96d022f7ff98d6e2d56fd70d60def0b191-merged.mount: Deactivated successfully.
Dec  3 16:42:22 np0005544708 podman[261692]: 2025-12-03 21:42:22.498507186 +0000 UTC m=+0.560295414 container remove 0e3abade3ce6723b772d9007b5f2d525f86c4d807bca0335da6beb82ebf635bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_williams, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Dec  3 16:42:22 np0005544708 systemd[1]: libpod-conmon-0e3abade3ce6723b772d9007b5f2d525f86c4d807bca0335da6beb82ebf635bf.scope: Deactivated successfully.
Dec  3 16:42:22 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1121: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:23 np0005544708 podman[261791]: 2025-12-03 21:42:23.09050936 +0000 UTC m=+0.039639135 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:42:23 np0005544708 podman[261791]: 2025-12-03 21:42:23.594433571 +0000 UTC m=+0.543563296 container create 42ed72c759ea3d48d6ce4bfb4209a1b9d03751ec9f6d208f738ec0415ae3840b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_shockley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec  3 16:42:23 np0005544708 systemd[1]: Started libpod-conmon-42ed72c759ea3d48d6ce4bfb4209a1b9d03751ec9f6d208f738ec0415ae3840b.scope.
Dec  3 16:42:23 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:42:23 np0005544708 podman[261806]: 2025-12-03 21:42:23.880513081 +0000 UTC m=+0.231247509 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  3 16:42:23 np0005544708 podman[261805]: 2025-12-03 21:42:23.886707048 +0000 UTC m=+0.237549379 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 16:42:24 np0005544708 podman[261791]: 2025-12-03 21:42:24.41120959 +0000 UTC m=+1.360339305 container init 42ed72c759ea3d48d6ce4bfb4209a1b9d03751ec9f6d208f738ec0415ae3840b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 16:42:24 np0005544708 podman[261791]: 2025-12-03 21:42:24.418371922 +0000 UTC m=+1.367501647 container start 42ed72c759ea3d48d6ce4bfb4209a1b9d03751ec9f6d208f738ec0415ae3840b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_shockley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec  3 16:42:24 np0005544708 modest_shockley[261833]: 167 167
Dec  3 16:42:24 np0005544708 systemd[1]: libpod-42ed72c759ea3d48d6ce4bfb4209a1b9d03751ec9f6d208f738ec0415ae3840b.scope: Deactivated successfully.
Dec  3 16:42:24 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1122: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:25 np0005544708 podman[261791]: 2025-12-03 21:42:25.096851689 +0000 UTC m=+2.045981424 container attach 42ed72c759ea3d48d6ce4bfb4209a1b9d03751ec9f6d208f738ec0415ae3840b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_shockley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  3 16:42:25 np0005544708 podman[261791]: 2025-12-03 21:42:25.097367783 +0000 UTC m=+2.046497508 container died 42ed72c759ea3d48d6ce4bfb4209a1b9d03751ec9f6d208f738ec0415ae3840b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:42:25 np0005544708 nova_compute[241566]: 2025-12-03 21:42:25.562 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:42:25 np0005544708 systemd[1]: var-lib-containers-storage-overlay-7023e6cbcd300db409e97a10882b8e3356c4d1563057672ce4e8fd90f2d81f91-merged.mount: Deactivated successfully.
Dec  3 16:42:26 np0005544708 podman[261791]: 2025-12-03 21:42:26.474129047 +0000 UTC m=+3.423258782 container remove 42ed72c759ea3d48d6ce4bfb4209a1b9d03751ec9f6d208f738ec0415ae3840b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  3 16:42:26 np0005544708 systemd[1]: libpod-conmon-42ed72c759ea3d48d6ce4bfb4209a1b9d03751ec9f6d208f738ec0415ae3840b.scope: Deactivated successfully.
Dec  3 16:42:26 np0005544708 nova_compute[241566]: 2025-12-03 21:42:26.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:42:26 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1123: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:26 np0005544708 podman[261870]: 2025-12-03 21:42:26.670051408 +0000 UTC m=+0.027097558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:42:27 np0005544708 podman[261870]: 2025-12-03 21:42:27.068490295 +0000 UTC m=+0.425536425 container create 2283f010186107884e92d3673032ee6c54c805e8e407d577ca658aed974b2349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:42:27 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:42:27 np0005544708 nova_compute[241566]: 2025-12-03 21:42:27.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:42:27 np0005544708 nova_compute[241566]: 2025-12-03 21:42:27.552 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 16:42:27 np0005544708 nova_compute[241566]: 2025-12-03 21:42:27.552 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 16:42:27 np0005544708 systemd[1]: Started libpod-conmon-2283f010186107884e92d3673032ee6c54c805e8e407d577ca658aed974b2349.scope.
Dec  3 16:42:27 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:42:27 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/824779ca59b45ae6611dadb2ed411ec0f241d00059d9f99b908a8e6ae7b0eaba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:42:27 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/824779ca59b45ae6611dadb2ed411ec0f241d00059d9f99b908a8e6ae7b0eaba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:42:27 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/824779ca59b45ae6611dadb2ed411ec0f241d00059d9f99b908a8e6ae7b0eaba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:42:27 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/824779ca59b45ae6611dadb2ed411ec0f241d00059d9f99b908a8e6ae7b0eaba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:42:27 np0005544708 podman[261870]: 2025-12-03 21:42:27.836076404 +0000 UTC m=+1.193122564 container init 2283f010186107884e92d3673032ee6c54c805e8e407d577ca658aed974b2349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_morse, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  3 16:42:27 np0005544708 podman[261870]: 2025-12-03 21:42:27.8478121 +0000 UTC m=+1.204858260 container start 2283f010186107884e92d3673032ee6c54c805e8e407d577ca658aed974b2349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:42:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:42:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:42:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:42:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:42:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.533351853729544e-07 of space, bias 1.0, pg target 0.00010600055561188632 quantized to 32 (current 32)
Dec  3 16:42:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:42:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.833582600959152e-08 of space, bias 1.0, pg target 2.9500747802877454e-05 quantized to 32 (current 32)
Dec  3 16:42:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:42:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:42:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:42:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006678894471709225 of space, bias 1.0, pg target 0.20036683415127676 quantized to 32 (current 32)
Dec  3 16:42:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:42:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.953496000112683e-07 of space, bias 4.0, pg target 0.0009544195200135219 quantized to 16 (current 16)
Dec  3 16:42:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:42:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:42:28 np0005544708 podman[261870]: 2025-12-03 21:42:28.035394007 +0000 UTC m=+1.392440137 container attach 2283f010186107884e92d3673032ee6c54c805e8e407d577ca658aed974b2349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_morse, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  3 16:42:28 np0005544708 lvm[261968]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:42:28 np0005544708 lvm[261968]: VG ceph_vg2 finished
Dec  3 16:42:28 np0005544708 lvm[261967]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:42:28 np0005544708 lvm[261967]: VG ceph_vg1 finished
Dec  3 16:42:28 np0005544708 lvm[261964]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:42:28 np0005544708 lvm[261964]: VG ceph_vg0 finished
Dec  3 16:42:28 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1124: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:28 np0005544708 brave_morse[261887]: {}
Dec  3 16:42:28 np0005544708 systemd[1]: libpod-2283f010186107884e92d3673032ee6c54c805e8e407d577ca658aed974b2349.scope: Deactivated successfully.
Dec  3 16:42:28 np0005544708 systemd[1]: libpod-2283f010186107884e92d3673032ee6c54c805e8e407d577ca658aed974b2349.scope: Consumed 1.376s CPU time.
Dec  3 16:42:28 np0005544708 podman[261870]: 2025-12-03 21:42:28.657223062 +0000 UTC m=+2.014269232 container died 2283f010186107884e92d3673032ee6c54c805e8e407d577ca658aed974b2349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  3 16:42:29 np0005544708 systemd[1]: var-lib-containers-storage-overlay-824779ca59b45ae6611dadb2ed411ec0f241d00059d9f99b908a8e6ae7b0eaba-merged.mount: Deactivated successfully.
Dec  3 16:42:30 np0005544708 podman[261870]: 2025-12-03 21:42:30.246125544 +0000 UTC m=+3.603171714 container remove 2283f010186107884e92d3673032ee6c54c805e8e407d577ca658aed974b2349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_morse, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  3 16:42:30 np0005544708 systemd[1]: libpod-conmon-2283f010186107884e92d3673032ee6c54c805e8e407d577ca658aed974b2349.scope: Deactivated successfully.
Dec  3 16:42:30 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:42:30 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1125: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:30 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:42:30 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:42:30 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:42:31 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:42:31 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:42:32 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:42:32 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1126: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:34 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1127: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:36 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1128: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:37 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:42:38 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1129: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:40 np0005544708 podman[262011]: 2025-12-03 21:42:40.246139784 +0000 UTC m=+0.154642913 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  3 16:42:40 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1130: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:42:42 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1131: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:44 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1132: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:46 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1133: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:48 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1134: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:42:48.951 151937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:42:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:42:48.951 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:42:48 np0005544708 ovn_metadata_agent[151932]: 2025-12-03 21:42:48.951 151937 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:42:50 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1135: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:50 np0005544708 ceph-mds[93586]: mds.beacon.cephfs.compute-0.gzkqle missed beacon ack from the monitors
Dec  3 16:42:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:42:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:42:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:42:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:42:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:42:51 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:42:52 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1136: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:54 np0005544708 podman[262039]: 2025-12-03 21:42:54.157474271 +0000 UTC m=+0.085305041 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  3 16:42:54 np0005544708 podman[262038]: 2025-12-03 21:42:54.162603099 +0000 UTC m=+0.094066696 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  3 16:42:54 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1137: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:54 np0005544708 ceph-mds[93586]: mds.beacon.cephfs.compute-0.gzkqle missed beacon ack from the monitors
Dec  3 16:42:56 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1138: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:58 np0005544708 ceph-mds[93586]: mds.beacon.cephfs.compute-0.gzkqle MDS connection to Monitors appears to be laggy; 15.2094s since last acked beacon
Dec  3 16:42:58 np0005544708 ceph-mds[93586]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Dec  3 16:42:58 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1139: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:42:58 np0005544708 ceph-mds[93586]: mds.beacon.cephfs.compute-0.gzkqle missed beacon ack from the monitors
Dec  3 16:43:00 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1140: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:02 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1141: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:02 np0005544708 ceph-mds[93586]: mds.beacon.cephfs.compute-0.gzkqle missed beacon ack from the monitors
Dec  3 16:43:03 np0005544708 ceph-mds[93586]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Dec  3 16:43:04 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1142: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:05 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 2e+01 seconds
Dec  3 16:43:05 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:43:05 np0005544708 ceph-mds[93586]: mds.beacon.cephfs.compute-0.gzkqle  MDS is no longer laggy
Dec  3 16:43:05 np0005544708 nova_compute[241566]: 2025-12-03 21:43:05.430 241570 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 31.69 sec#033[00m
Dec  3 16:43:05 np0005544708 nova_compute[241566]: 2025-12-03 21:43:05.511 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 16:43:05 np0005544708 nova_compute[241566]: 2025-12-03 21:43:05.512 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:43:05 np0005544708 nova_compute[241566]: 2025-12-03 21:43:05.512 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:43:05 np0005544708 nova_compute[241566]: 2025-12-03 21:43:05.513 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:43:05 np0005544708 nova_compute[241566]: 2025-12-03 21:43:05.513 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:43:05 np0005544708 nova_compute[241566]: 2025-12-03 21:43:05.513 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:43:05 np0005544708 nova_compute[241566]: 2025-12-03 21:43:05.514 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 16:43:05 np0005544708 nova_compute[241566]: 2025-12-03 21:43:05.514 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:43:05 np0005544708 nova_compute[241566]: 2025-12-03 21:43:05.548 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:43:05 np0005544708 nova_compute[241566]: 2025-12-03 21:43:05.549 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:43:05 np0005544708 nova_compute[241566]: 2025-12-03 21:43:05.550 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:43:05 np0005544708 nova_compute[241566]: 2025-12-03 21:43:05.550 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 16:43:05 np0005544708 nova_compute[241566]: 2025-12-03 21:43:05.551 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:43:05 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec  3 16:43:05 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3643027809' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec  3 16:43:05 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec  3 16:43:05 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3643027809' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec  3 16:43:06 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:43:06 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3008246827' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:43:06 np0005544708 nova_compute[241566]: 2025-12-03 21:43:06.105 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:43:06 np0005544708 nova_compute[241566]: 2025-12-03 21:43:06.343 241570 WARNING nova.virt.libvirt.driver [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 16:43:06 np0005544708 nova_compute[241566]: 2025-12-03 21:43:06.345 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5156MB free_disk=59.988260054029524GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 16:43:06 np0005544708 nova_compute[241566]: 2025-12-03 21:43:06.345 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:43:06 np0005544708 nova_compute[241566]: 2025-12-03 21:43:06.345 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:43:06 np0005544708 nova_compute[241566]: 2025-12-03 21:43:06.500 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 16:43:06 np0005544708 nova_compute[241566]: 2025-12-03 21:43:06.501 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 16:43:06 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1143: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:06 np0005544708 nova_compute[241566]: 2025-12-03 21:43:06.732 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Refreshing inventories for resource provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  3 16:43:06 np0005544708 nova_compute[241566]: 2025-12-03 21:43:06.784 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Updating ProviderTree inventory for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  3 16:43:06 np0005544708 nova_compute[241566]: 2025-12-03 21:43:06.785 241570 DEBUG nova.compute.provider_tree [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Updating inventory in ProviderTree for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  3 16:43:06 np0005544708 nova_compute[241566]: 2025-12-03 21:43:06.799 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Refreshing aggregate associations for resource provider 94aba67c-5c5e-45d0-83d1-33eb467c8775, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  3 16:43:06 np0005544708 nova_compute[241566]: 2025-12-03 21:43:06.833 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Refreshing trait associations for resource provider 94aba67c-5c5e-45d0-83d1-33eb467c8775, traits: HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_FMA3,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  3 16:43:06 np0005544708 nova_compute[241566]: 2025-12-03 21:43:06.848 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:43:07 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:43:07 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3631726271' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:43:07 np0005544708 nova_compute[241566]: 2025-12-03 21:43:07.366 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:43:07 np0005544708 nova_compute[241566]: 2025-12-03 21:43:07.374 241570 DEBUG nova.compute.provider_tree [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed in ProviderTree for provider: 94aba67c-5c5e-45d0-83d1-33eb467c8775 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 16:43:07 np0005544708 nova_compute[241566]: 2025-12-03 21:43:07.406 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 16:43:07 np0005544708 nova_compute[241566]: 2025-12-03 21:43:07.408 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 16:43:07 np0005544708 nova_compute[241566]: 2025-12-03 21:43:07.408 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:43:07 np0005544708 nova_compute[241566]: 2025-12-03 21:43:07.409 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:43:07 np0005544708 nova_compute[241566]: 2025-12-03 21:43:07.409 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  3 16:43:07 np0005544708 nova_compute[241566]: 2025-12-03 21:43:07.430 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  3 16:43:07 np0005544708 nova_compute[241566]: 2025-12-03 21:43:07.431 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:43:07 np0005544708 nova_compute[241566]: 2025-12-03 21:43:07.432 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  3 16:43:08 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1144: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:10 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:43:10 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1145: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:11 np0005544708 podman[262123]: 2025-12-03 21:43:11.177939446 +0000 UTC m=+0.116141209 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  3 16:43:12 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1146: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:14 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1147: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:15 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:43:16 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1148: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:18 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1149: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:20 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:43:20 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1150: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Optimize plan auto_2025-12-03_21:43:21
Dec  3 16:43:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  3 16:43:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] do_upmap
Dec  3 16:43:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'volumes', 'images', 'vms', 'cephfs.cephfs.data', '.mgr']
Dec  3 16:43:21 np0005544708 ceph-mgr[75500]: [balancer INFO root] prepared 0/10 upmap changes
Dec  3 16:43:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:43:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:43:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:43:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:43:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] scanning for idle connections..
Dec  3 16:43:21 np0005544708 ceph-mgr[75500]: [volumes INFO mgr_util] cleaning up connections: []
Dec  3 16:43:22 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  3 16:43:22 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:43:22 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  3 16:43:22 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  3 16:43:22 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:43:22 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  3 16:43:22 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:43:22 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  3 16:43:22 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:43:22 np0005544708 ceph-mgr[75500]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  3 16:43:22 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1151: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:24 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1152: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:25 np0005544708 podman[262152]: 2025-12-03 21:43:25.14275306 +0000 UTC m=+0.063928098 container health_status ca3c9ffcca83e311ddd4b17245064c8421e62312b554af833cb488679e04a66c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  3 16:43:25 np0005544708 podman[262151]: 2025-12-03 21:43:25.142720879 +0000 UTC m=+0.075906959 container health_status 2dbb897091576d016ef1909f1fe5c92482cf70641de50d59fe0f3cebe24c4b4c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  3 16:43:25 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:43:26 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1153: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] _maybe_adjust
Dec  3 16:43:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:43:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  3 16:43:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:43:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.533351853729544e-07 of space, bias 1.0, pg target 0.00010600055561188632 quantized to 32 (current 32)
Dec  3 16:43:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:43:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.833582600959152e-08 of space, bias 1.0, pg target 2.9500747802877454e-05 quantized to 32 (current 32)
Dec  3 16:43:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:43:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:43:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:43:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006678894471709225 of space, bias 1.0, pg target 0.20036683415127676 quantized to 32 (current 32)
Dec  3 16:43:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:43:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.953496000112683e-07 of space, bias 4.0, pg target 0.0009544195200135219 quantized to 16 (current 16)
Dec  3 16:43:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  3 16:43:27 np0005544708 ceph-mgr[75500]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  3 16:43:28 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1154: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:28 np0005544708 systemd-logind[787]: New session 54 of user zuul.
Dec  3 16:43:28 np0005544708 systemd[1]: Started Session 54 of User zuul.
Dec  3 16:43:30 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:43:30 np0005544708 nova_compute[241566]: 2025-12-03 21:43:30.494 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:43:30 np0005544708 nova_compute[241566]: 2025-12-03 21:43:30.496 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:43:30 np0005544708 nova_compute[241566]: 2025-12-03 21:43:30.496 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  3 16:43:30 np0005544708 nova_compute[241566]: 2025-12-03 21:43:30.496 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  3 16:43:30 np0005544708 nova_compute[241566]: 2025-12-03 21:43:30.534 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  3 16:43:30 np0005544708 nova_compute[241566]: 2025-12-03 21:43:30.534 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:43:30 np0005544708 nova_compute[241566]: 2025-12-03 21:43:30.534 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:43:30 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1155: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:31 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15014 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:43:31 np0005544708 nova_compute[241566]: 2025-12-03 21:43:31.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:43:31 np0005544708 nova_compute[241566]: 2025-12-03 21:43:31.552 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:43:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:43:31 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:43:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  3 16:43:31 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:43:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  3 16:43:31 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:43:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  3 16:43:31 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec  3 16:43:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  3 16:43:31 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:43:31 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:43:31 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:43:32 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15016 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:43:32 np0005544708 podman[262544]: 2025-12-03 21:43:32.315017669 +0000 UTC m=+0.058087804 container create 01dbbd80f8ac54a7841a221a006ac85ba98cfa0750edc3a5a1fec80ff14f027d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:43:32 np0005544708 systemd[1]: Started libpod-conmon-01dbbd80f8ac54a7841a221a006ac85ba98cfa0750edc3a5a1fec80ff14f027d.scope.
Dec  3 16:43:32 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:43:32 np0005544708 podman[262544]: 2025-12-03 21:43:32.287102727 +0000 UTC m=+0.030172882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:43:32 np0005544708 podman[262544]: 2025-12-03 21:43:32.399231597 +0000 UTC m=+0.142301742 container init 01dbbd80f8ac54a7841a221a006ac85ba98cfa0750edc3a5a1fec80ff14f027d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  3 16:43:32 np0005544708 podman[262544]: 2025-12-03 21:43:32.405952858 +0000 UTC m=+0.149023013 container start 01dbbd80f8ac54a7841a221a006ac85ba98cfa0750edc3a5a1fec80ff14f027d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec  3 16:43:32 np0005544708 podman[262544]: 2025-12-03 21:43:32.409314139 +0000 UTC m=+0.152384294 container attach 01dbbd80f8ac54a7841a221a006ac85ba98cfa0750edc3a5a1fec80ff14f027d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:43:32 np0005544708 systemd[1]: libpod-01dbbd80f8ac54a7841a221a006ac85ba98cfa0750edc3a5a1fec80ff14f027d.scope: Deactivated successfully.
Dec  3 16:43:32 np0005544708 vibrant_elgamal[262579]: 167 167
Dec  3 16:43:32 np0005544708 conmon[262579]: conmon 01dbbd80f8ac54a7841a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-01dbbd80f8ac54a7841a221a006ac85ba98cfa0750edc3a5a1fec80ff14f027d.scope/container/memory.events
Dec  3 16:43:32 np0005544708 podman[262544]: 2025-12-03 21:43:32.412877444 +0000 UTC m=+0.155947579 container died 01dbbd80f8ac54a7841a221a006ac85ba98cfa0750edc3a5a1fec80ff14f027d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec  3 16:43:32 np0005544708 systemd[1]: var-lib-containers-storage-overlay-cb60def95016433e9c1772bc4e1d907bdae96dc10c60b2fd1ab206d8a3e1f974-merged.mount: Deactivated successfully.
Dec  3 16:43:32 np0005544708 podman[262544]: 2025-12-03 21:43:32.463492727 +0000 UTC m=+0.206562882 container remove 01dbbd80f8ac54a7841a221a006ac85ba98cfa0750edc3a5a1fec80ff14f027d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:43:32 np0005544708 systemd[1]: libpod-conmon-01dbbd80f8ac54a7841a221a006ac85ba98cfa0750edc3a5a1fec80ff14f027d.scope: Deactivated successfully.
Dec  3 16:43:32 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1156: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:32 np0005544708 podman[262603]: 2025-12-03 21:43:32.65077699 +0000 UTC m=+0.045757113 container create 6799f00a58e92e62afa45251e3a23b3cc0d9e99ca2ece2ac38118a0c66fe0906 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_borg, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Dec  3 16:43:32 np0005544708 systemd[1]: Started libpod-conmon-6799f00a58e92e62afa45251e3a23b3cc0d9e99ca2ece2ac38118a0c66fe0906.scope.
Dec  3 16:43:32 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:43:32 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f5e65fa36f75a09804316109c16436691c98463d39d92b2ffd7f280ab83a34e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:43:32 np0005544708 podman[262603]: 2025-12-03 21:43:32.633795673 +0000 UTC m=+0.028775796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:43:32 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f5e65fa36f75a09804316109c16436691c98463d39d92b2ffd7f280ab83a34e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:43:32 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f5e65fa36f75a09804316109c16436691c98463d39d92b2ffd7f280ab83a34e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:43:32 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f5e65fa36f75a09804316109c16436691c98463d39d92b2ffd7f280ab83a34e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:43:32 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f5e65fa36f75a09804316109c16436691c98463d39d92b2ffd7f280ab83a34e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  3 16:43:32 np0005544708 podman[262603]: 2025-12-03 21:43:32.746120568 +0000 UTC m=+0.141100681 container init 6799f00a58e92e62afa45251e3a23b3cc0d9e99ca2ece2ac38118a0c66fe0906 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_borg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec  3 16:43:32 np0005544708 podman[262603]: 2025-12-03 21:43:32.757306049 +0000 UTC m=+0.152286172 container start 6799f00a58e92e62afa45251e3a23b3cc0d9e99ca2ece2ac38118a0c66fe0906 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_borg, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:43:32 np0005544708 podman[262603]: 2025-12-03 21:43:32.761892063 +0000 UTC m=+0.156872176 container attach 6799f00a58e92e62afa45251e3a23b3cc0d9e99ca2ece2ac38118a0c66fe0906 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_borg, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:43:32 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec  3 16:43:32 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/680629300' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec  3 16:43:32 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec  3 16:43:32 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:43:32 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec  3 16:43:33 np0005544708 thirsty_borg[262620]: --> passed data devices: 0 physical, 3 LVM
Dec  3 16:43:33 np0005544708 thirsty_borg[262620]: --> All data devices are unavailable
Dec  3 16:43:33 np0005544708 systemd[1]: libpod-6799f00a58e92e62afa45251e3a23b3cc0d9e99ca2ece2ac38118a0c66fe0906.scope: Deactivated successfully.
Dec  3 16:43:33 np0005544708 podman[262603]: 2025-12-03 21:43:33.33593948 +0000 UTC m=+0.730919573 container died 6799f00a58e92e62afa45251e3a23b3cc0d9e99ca2ece2ac38118a0c66fe0906 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_borg, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:43:33 np0005544708 systemd[1]: var-lib-containers-storage-overlay-2f5e65fa36f75a09804316109c16436691c98463d39d92b2ffd7f280ab83a34e-merged.mount: Deactivated successfully.
Dec  3 16:43:33 np0005544708 podman[262603]: 2025-12-03 21:43:33.383659695 +0000 UTC m=+0.778639788 container remove 6799f00a58e92e62afa45251e3a23b3cc0d9e99ca2ece2ac38118a0c66fe0906 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_borg, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec  3 16:43:33 np0005544708 systemd[1]: libpod-conmon-6799f00a58e92e62afa45251e3a23b3cc0d9e99ca2ece2ac38118a0c66fe0906.scope: Deactivated successfully.
Dec  3 16:43:33 np0005544708 nova_compute[241566]: 2025-12-03 21:43:33.551 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:43:33 np0005544708 podman[262721]: 2025-12-03 21:43:33.986445317 +0000 UTC m=+0.060034979 container create 4a09349cd5ccce9a22ba446c0b43f79c9a83844efec17d07d508b48033106b61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_blackwell, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  3 16:43:34 np0005544708 systemd[1]: Started libpod-conmon-4a09349cd5ccce9a22ba446c0b43f79c9a83844efec17d07d508b48033106b61.scope.
Dec  3 16:43:34 np0005544708 podman[262721]: 2025-12-03 21:43:33.964741662 +0000 UTC m=+0.038331334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:43:34 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:43:34 np0005544708 podman[262721]: 2025-12-03 21:43:34.090853868 +0000 UTC m=+0.164443530 container init 4a09349cd5ccce9a22ba446c0b43f79c9a83844efec17d07d508b48033106b61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec  3 16:43:34 np0005544708 podman[262721]: 2025-12-03 21:43:34.101720591 +0000 UTC m=+0.175310253 container start 4a09349cd5ccce9a22ba446c0b43f79c9a83844efec17d07d508b48033106b61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_blackwell, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  3 16:43:34 np0005544708 podman[262721]: 2025-12-03 21:43:34.106036107 +0000 UTC m=+0.179625779 container attach 4a09349cd5ccce9a22ba446c0b43f79c9a83844efec17d07d508b48033106b61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  3 16:43:34 np0005544708 gallant_blackwell[262737]: 167 167
Dec  3 16:43:34 np0005544708 podman[262721]: 2025-12-03 21:43:34.109012197 +0000 UTC m=+0.182601879 container died 4a09349cd5ccce9a22ba446c0b43f79c9a83844efec17d07d508b48033106b61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_blackwell, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 16:43:34 np0005544708 systemd[1]: libpod-4a09349cd5ccce9a22ba446c0b43f79c9a83844efec17d07d508b48033106b61.scope: Deactivated successfully.
Dec  3 16:43:34 np0005544708 systemd[1]: var-lib-containers-storage-overlay-fd7f5f0b3070a2cdbee76722a3c3f455ced840f90b2ba85fdff9aaef9cb063b5-merged.mount: Deactivated successfully.
Dec  3 16:43:34 np0005544708 podman[262721]: 2025-12-03 21:43:34.153523866 +0000 UTC m=+0.227113528 container remove 4a09349cd5ccce9a22ba446c0b43f79c9a83844efec17d07d508b48033106b61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_blackwell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  3 16:43:34 np0005544708 systemd[1]: libpod-conmon-4a09349cd5ccce9a22ba446c0b43f79c9a83844efec17d07d508b48033106b61.scope: Deactivated successfully.
Dec  3 16:43:34 np0005544708 podman[262760]: 2025-12-03 21:43:34.362097532 +0000 UTC m=+0.052418373 container create 59779ec8140b78d4036e77376986a5da67d09c3492dc7fb5486a589100bc8b9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  3 16:43:34 np0005544708 systemd[1]: Started libpod-conmon-59779ec8140b78d4036e77376986a5da67d09c3492dc7fb5486a589100bc8b9f.scope.
Dec  3 16:43:34 np0005544708 podman[262760]: 2025-12-03 21:43:34.337308474 +0000 UTC m=+0.027629395 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:43:34 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:43:34 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7395a1003d6b6026a709cbe8b0f77fc3f08645e5fc7a66773b377e9c7e5a4b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:43:34 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7395a1003d6b6026a709cbe8b0f77fc3f08645e5fc7a66773b377e9c7e5a4b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:43:34 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7395a1003d6b6026a709cbe8b0f77fc3f08645e5fc7a66773b377e9c7e5a4b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:43:34 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7395a1003d6b6026a709cbe8b0f77fc3f08645e5fc7a66773b377e9c7e5a4b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:43:34 np0005544708 podman[262760]: 2025-12-03 21:43:34.480106339 +0000 UTC m=+0.170427270 container init 59779ec8140b78d4036e77376986a5da67d09c3492dc7fb5486a589100bc8b9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_taussig, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  3 16:43:34 np0005544708 podman[262760]: 2025-12-03 21:43:34.486843301 +0000 UTC m=+0.177164162 container start 59779ec8140b78d4036e77376986a5da67d09c3492dc7fb5486a589100bc8b9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec  3 16:43:34 np0005544708 podman[262760]: 2025-12-03 21:43:34.48975949 +0000 UTC m=+0.180080341 container attach 59779ec8140b78d4036e77376986a5da67d09c3492dc7fb5486a589100bc8b9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_taussig, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:43:34 np0005544708 nova_compute[241566]: 2025-12-03 21:43:34.552 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:43:34 np0005544708 nova_compute[241566]: 2025-12-03 21:43:34.554 241570 DEBUG nova.compute.manager [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  3 16:43:34 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1157: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:34 np0005544708 nice_taussig[262781]: {
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:    "0": [
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:        {
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "devices": [
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "/dev/loop3"
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            ],
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "lv_name": "ceph_lv0",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "lv_size": "21470642176",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d33bc95-baf8-481d-bc78-3b15ffd29872,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "lv_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "name": "ceph_lv0",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "tags": {
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.block_uuid": "PwNUS4-BHZk-ELZr-3583-tr9Z-qLm9-dy1sXV",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.cluster_name": "ceph",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.crush_device_class": "",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.encrypted": "0",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.objectstore": "bluestore",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.osd_fsid": "4d33bc95-baf8-481d-bc78-3b15ffd29872",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.osd_id": "0",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.type": "block",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.vdo": "0",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.with_tpm": "0"
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            },
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "type": "block",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "vg_name": "ceph_vg0"
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:        }
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:    ],
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:    "1": [
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:        {
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "devices": [
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "/dev/loop4"
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            ],
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "lv_name": "ceph_lv1",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "lv_size": "21470642176",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=c4086f1b-ff53-4e63-8dc0-011238d77976,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "lv_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "name": "ceph_lv1",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "tags": {
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.block_uuid": "T7ogio-vnKG-UnmK-tQf4-EOV3-bgiF-2ulcyV",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.cluster_name": "ceph",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.crush_device_class": "",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.encrypted": "0",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.objectstore": "bluestore",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.osd_fsid": "c4086f1b-ff53-4e63-8dc0-011238d77976",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.osd_id": "1",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.type": "block",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.vdo": "0",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.with_tpm": "0"
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            },
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "type": "block",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "vg_name": "ceph_vg1"
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:        }
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:    ],
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:    "2": [
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:        {
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "devices": [
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "/dev/loop5"
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            ],
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "lv_name": "ceph_lv2",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "lv_size": "21470642176",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c21de27e-a7fd-594b-8324-0697ba9aab3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=abcd6a67-9013-4470-978f-f75da5f33cd4,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "lv_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "name": "ceph_lv2",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "tags": {
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.block_uuid": "ShGq8u-AIcR-gjG5-ke7J-C2kz-D2Cd-22IOFY",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.cephx_lockbox_secret": "",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.cluster_fsid": "c21de27e-a7fd-594b-8324-0697ba9aab3a",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.cluster_name": "ceph",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.crush_device_class": "",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.encrypted": "0",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.objectstore": "bluestore",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.osd_fsid": "abcd6a67-9013-4470-978f-f75da5f33cd4",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.osd_id": "2",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.type": "block",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.vdo": "0",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:                "ceph.with_tpm": "0"
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            },
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "type": "block",
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:            "vg_name": "ceph_vg2"
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:        }
Dec  3 16:43:34 np0005544708 nice_taussig[262781]:    ]
Dec  3 16:43:34 np0005544708 nice_taussig[262781]: }
Dec  3 16:43:34 np0005544708 systemd[1]: libpod-59779ec8140b78d4036e77376986a5da67d09c3492dc7fb5486a589100bc8b9f.scope: Deactivated successfully.
Dec  3 16:43:34 np0005544708 podman[262760]: 2025-12-03 21:43:34.824723259 +0000 UTC m=+0.515044110 container died 59779ec8140b78d4036e77376986a5da67d09c3492dc7fb5486a589100bc8b9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:43:34 np0005544708 systemd[1]: var-lib-containers-storage-overlay-f7395a1003d6b6026a709cbe8b0f77fc3f08645e5fc7a66773b377e9c7e5a4b1-merged.mount: Deactivated successfully.
Dec  3 16:43:34 np0005544708 podman[262760]: 2025-12-03 21:43:34.865810466 +0000 UTC m=+0.556131297 container remove 59779ec8140b78d4036e77376986a5da67d09c3492dc7fb5486a589100bc8b9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  3 16:43:34 np0005544708 systemd[1]: libpod-conmon-59779ec8140b78d4036e77376986a5da67d09c3492dc7fb5486a589100bc8b9f.scope: Deactivated successfully.
Dec  3 16:43:35 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:43:35 np0005544708 podman[262874]: 2025-12-03 21:43:35.39072026 +0000 UTC m=+0.049511424 container create bba68ce820ec2cc0f2f2d45aa4e80ce740a5c6c0a25f2459db34826d6dc6f21f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec  3 16:43:35 np0005544708 systemd[1]: Started libpod-conmon-bba68ce820ec2cc0f2f2d45aa4e80ce740a5c6c0a25f2459db34826d6dc6f21f.scope.
Dec  3 16:43:35 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:43:35 np0005544708 podman[262874]: 2025-12-03 21:43:35.371984115 +0000 UTC m=+0.030775259 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:43:35 np0005544708 podman[262874]: 2025-12-03 21:43:35.480101587 +0000 UTC m=+0.138892751 container init bba68ce820ec2cc0f2f2d45aa4e80ce740a5c6c0a25f2459db34826d6dc6f21f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jackson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:43:35 np0005544708 podman[262874]: 2025-12-03 21:43:35.491697649 +0000 UTC m=+0.150488783 container start bba68ce820ec2cc0f2f2d45aa4e80ce740a5c6c0a25f2459db34826d6dc6f21f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  3 16:43:35 np0005544708 podman[262874]: 2025-12-03 21:43:35.494842434 +0000 UTC m=+0.153633568 container attach bba68ce820ec2cc0f2f2d45aa4e80ce740a5c6c0a25f2459db34826d6dc6f21f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  3 16:43:35 np0005544708 busy_jackson[262899]: 167 167
Dec  3 16:43:35 np0005544708 systemd[1]: libpod-bba68ce820ec2cc0f2f2d45aa4e80ce740a5c6c0a25f2459db34826d6dc6f21f.scope: Deactivated successfully.
Dec  3 16:43:35 np0005544708 podman[262874]: 2025-12-03 21:43:35.500525206 +0000 UTC m=+0.159316370 container died bba68ce820ec2cc0f2f2d45aa4e80ce740a5c6c0a25f2459db34826d6dc6f21f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec  3 16:43:35 np0005544708 systemd[1]: var-lib-containers-storage-overlay-bad2ef6951d78e139d0e3b290d97ac1138d6fb83dbdf61a07047500a4da5e981-merged.mount: Deactivated successfully.
Dec  3 16:43:35 np0005544708 podman[262874]: 2025-12-03 21:43:35.550891253 +0000 UTC m=+0.209682387 container remove bba68ce820ec2cc0f2f2d45aa4e80ce740a5c6c0a25f2459db34826d6dc6f21f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jackson, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  3 16:43:35 np0005544708 nova_compute[241566]: 2025-12-03 21:43:35.552 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:43:35 np0005544708 ovs-vsctl[262920]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  3 16:43:35 np0005544708 systemd[1]: libpod-conmon-bba68ce820ec2cc0f2f2d45aa4e80ce740a5c6c0a25f2459db34826d6dc6f21f.scope: Deactivated successfully.
Dec  3 16:43:35 np0005544708 nova_compute[241566]: 2025-12-03 21:43:35.585 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:43:35 np0005544708 nova_compute[241566]: 2025-12-03 21:43:35.586 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:43:35 np0005544708 nova_compute[241566]: 2025-12-03 21:43:35.586 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:43:35 np0005544708 nova_compute[241566]: 2025-12-03 21:43:35.587 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  3 16:43:35 np0005544708 nova_compute[241566]: 2025-12-03 21:43:35.587 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:43:35 np0005544708 podman[262962]: 2025-12-03 21:43:35.790055113 +0000 UTC m=+0.059861663 container create 7b401a3d65fc0cb9e44889a2d37a92f1d21a80d4eb7f0d1f23329050d33ca714 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  3 16:43:35 np0005544708 systemd[1]: Started libpod-conmon-7b401a3d65fc0cb9e44889a2d37a92f1d21a80d4eb7f0d1f23329050d33ca714.scope.
Dec  3 16:43:35 np0005544708 podman[262962]: 2025-12-03 21:43:35.765266356 +0000 UTC m=+0.035072986 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec  3 16:43:35 np0005544708 systemd[1]: Started libcrun container.
Dec  3 16:43:35 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2310d563a1dfab11160840e22124782f875634f2b6767445fe8ad4fa0372419/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  3 16:43:35 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2310d563a1dfab11160840e22124782f875634f2b6767445fe8ad4fa0372419/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  3 16:43:35 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2310d563a1dfab11160840e22124782f875634f2b6767445fe8ad4fa0372419/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  3 16:43:35 np0005544708 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2310d563a1dfab11160840e22124782f875634f2b6767445fe8ad4fa0372419/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  3 16:43:35 np0005544708 podman[262962]: 2025-12-03 21:43:35.89093566 +0000 UTC m=+0.160742230 container init 7b401a3d65fc0cb9e44889a2d37a92f1d21a80d4eb7f0d1f23329050d33ca714 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_nightingale, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  3 16:43:35 np0005544708 podman[262962]: 2025-12-03 21:43:35.902772788 +0000 UTC m=+0.172579328 container start 7b401a3d65fc0cb9e44889a2d37a92f1d21a80d4eb7f0d1f23329050d33ca714 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_nightingale, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  3 16:43:35 np0005544708 podman[262962]: 2025-12-03 21:43:35.905839291 +0000 UTC m=+0.175645861 container attach 7b401a3d65fc0cb9e44889a2d37a92f1d21a80d4eb7f0d1f23329050d33ca714 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_nightingale, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  3 16:43:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:43:36 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/504657527' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:43:36 np0005544708 nova_compute[241566]: 2025-12-03 21:43:36.116 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:43:36 np0005544708 nova_compute[241566]: 2025-12-03 21:43:36.267 241570 WARNING nova.virt.libvirt.driver [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  3 16:43:36 np0005544708 nova_compute[241566]: 2025-12-03 21:43:36.269 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4995MB free_disk=59.988260054029524GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  3 16:43:36 np0005544708 nova_compute[241566]: 2025-12-03 21:43:36.269 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  3 16:43:36 np0005544708 nova_compute[241566]: 2025-12-03 21:43:36.269 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  3 16:43:36 np0005544708 nova_compute[241566]: 2025-12-03 21:43:36.336 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  3 16:43:36 np0005544708 nova_compute[241566]: 2025-12-03 21:43:36.337 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  3 16:43:36 np0005544708 nova_compute[241566]: 2025-12-03 21:43:36.350 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  3 16:43:36 np0005544708 virtqemud[241184]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  3 16:43:36 np0005544708 virtqemud[241184]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  3 16:43:36 np0005544708 virtqemud[241184]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  3 16:43:36 np0005544708 lvm[263204]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:43:36 np0005544708 lvm[263204]: VG ceph_vg0 finished
Dec  3 16:43:36 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1158: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:36 np0005544708 lvm[263208]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:43:36 np0005544708 lvm[263208]: VG ceph_vg1 finished
Dec  3 16:43:36 np0005544708 lvm[263213]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:43:36 np0005544708 lvm[263213]: VG ceph_vg2 finished
Dec  3 16:43:36 np0005544708 lvm[263215]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:43:36 np0005544708 lvm[263215]: VG ceph_vg0 finished
Dec  3 16:43:36 np0005544708 eloquent_nightingale[263007]: {}
Dec  3 16:43:36 np0005544708 systemd[1]: libpod-7b401a3d65fc0cb9e44889a2d37a92f1d21a80d4eb7f0d1f23329050d33ca714.scope: Deactivated successfully.
Dec  3 16:43:36 np0005544708 systemd[1]: libpod-7b401a3d65fc0cb9e44889a2d37a92f1d21a80d4eb7f0d1f23329050d33ca714.scope: Consumed 1.322s CPU time.
Dec  3 16:43:36 np0005544708 conmon[263007]: conmon 7b401a3d65fc0cb9e448 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b401a3d65fc0cb9e44889a2d37a92f1d21a80d4eb7f0d1f23329050d33ca714.scope/container/memory.events
Dec  3 16:43:36 np0005544708 podman[262962]: 2025-12-03 21:43:36.76256282 +0000 UTC m=+1.032369380 container died 7b401a3d65fc0cb9e44889a2d37a92f1d21a80d4eb7f0d1f23329050d33ca714 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_nightingale, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  3 16:43:36 np0005544708 systemd[1]: var-lib-containers-storage-overlay-a2310d563a1dfab11160840e22124782f875634f2b6767445fe8ad4fa0372419-merged.mount: Deactivated successfully.
Dec  3 16:43:36 np0005544708 podman[262962]: 2025-12-03 21:43:36.817660774 +0000 UTC m=+1.087467314 container remove 7b401a3d65fc0cb9e44889a2d37a92f1d21a80d4eb7f0d1f23329050d33ca714 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_nightingale, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  3 16:43:36 np0005544708 systemd[1]: libpod-conmon-7b401a3d65fc0cb9e44889a2d37a92f1d21a80d4eb7f0d1f23329050d33ca714.scope: Deactivated successfully.
Dec  3 16:43:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec  3 16:43:36 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3255983988' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec  3 16:43:36 np0005544708 nova_compute[241566]: 2025-12-03 21:43:36.875 241570 DEBUG oslo_concurrency.processutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  3 16:43:36 np0005544708 nova_compute[241566]: 2025-12-03 21:43:36.884 241570 DEBUG nova.compute.provider_tree [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed in ProviderTree for provider: 94aba67c-5c5e-45d0-83d1-33eb467c8775 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  3 16:43:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  3 16:43:36 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:43:36 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  3 16:43:36 np0005544708 ceph-mon[75204]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:43:36 np0005544708 nova_compute[241566]: 2025-12-03 21:43:36.916 241570 DEBUG nova.scheduler.client.report [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Inventory has not changed for provider 94aba67c-5c5e-45d0-83d1-33eb467c8775 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  3 16:43:36 np0005544708 nova_compute[241566]: 2025-12-03 21:43:36.918 241570 DEBUG nova.compute.resource_tracker [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  3 16:43:36 np0005544708 nova_compute[241566]: 2025-12-03 21:43:36.918 241570 DEBUG oslo_concurrency.lockutils [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  3 16:43:37 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: cache status {prefix=cache status} (starting...)
Dec  3 16:43:37 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: client ls {prefix=client ls} (starting...)
Dec  3 16:43:37 np0005544708 lvm[263444]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  3 16:43:37 np0005544708 lvm[263444]: VG ceph_vg0 finished
Dec  3 16:43:37 np0005544708 lvm[263452]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  3 16:43:37 np0005544708 lvm[263452]: VG ceph_vg2 finished
Dec  3 16:43:37 np0005544708 lvm[263484]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  3 16:43:37 np0005544708 lvm[263484]: VG ceph_vg1 finished
Dec  3 16:43:37 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15024 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:43:37 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:43:37 np0005544708 ceph-mon[75204]: from='mgr.14122 192.168.122.100:0/2726886298' entity='mgr.compute-0.jxauqt' 
Dec  3 16:43:37 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: damage ls {prefix=damage ls} (starting...)
Dec  3 16:43:37 np0005544708 nova_compute[241566]: 2025-12-03 21:43:37.912 241570 DEBUG oslo_service.periodic_task [None req-71bc7e37-c187-47ed-86bc-0ddfb5097d38 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  3 16:43:38 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: dump loads {prefix=dump loads} (starting...)
Dec  3 16:43:38 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  3 16:43:38 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15026 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:43:38 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  3 16:43:38 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  3 16:43:38 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  3 16:43:38 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15028 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:43:38 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1159: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:38 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Dec  3 16:43:38 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/382969193' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec  3 16:43:38 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  3 16:43:38 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  3 16:43:39 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15032 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:43:39 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jxauqt[75496]: 2025-12-03T21:43:39.095+0000 7fa8c63a3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 16:43:39 np0005544708 ceph-mgr[75500]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 16:43:39 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: ops {prefix=ops} (starting...)
Dec  3 16:43:39 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  3 16:43:39 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/784569290' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec  3 16:43:39 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec  3 16:43:39 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3601736007' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec  3 16:43:39 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Dec  3 16:43:39 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/620582623' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec  3 16:43:39 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: session ls {prefix=session ls} (starting...)
Dec  3 16:43:39 np0005544708 ceph-mds[93586]: mds.cephfs.compute-0.gzkqle asok_command: status {prefix=status} (starting...)
Dec  3 16:43:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:43:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec  3 16:43:40 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2292369582' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec  3 16:43:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec  3 16:43:40 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1133625465' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec  3 16:43:40 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1160: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:40 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec  3 16:43:40 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1088593252' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec  3 16:43:40 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15046 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:43:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  3 16:43:41 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1843679869' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec  3 16:43:41 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15050 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:43:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  3 16:43:41 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3593633270' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec  3 16:43:41 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Dec  3 16:43:41 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2892314225' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec  3 16:43:42 np0005544708 podman[264012]: 2025-12-03 21:43:42.158431957 +0000 UTC m=+0.094593279 container health_status eea7b0f6b73da2db5c525e534cccce9e79c080e649bef7a358a1c660ff7b788b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  3 16:43:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  3 16:43:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279544096' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec  3 16:43:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec  3 16:43:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1744587186' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec  3 16:43:42 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1161: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:42 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15062 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:43:42 np0005544708 ceph-mgr[75500]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  3 16:43:42 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jxauqt[75496]: 2025-12-03T21:43:42.767+0000 7fa8c63a3640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  3 16:43:42 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  3 16:43:42 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2889469172' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec  3 16:43:43 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec  3 16:43:43 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3284278167' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec  3 16:43:43 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15066 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 122880 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 114688 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 425895 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 114688 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 114688 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 106496 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 106496 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 98304 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 428308 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 98304 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 90112 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.105876923s of 10.125753403s, submitted: 10
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 81920 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 81920 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 65536 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 437958 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 65536 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 65536 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 32768 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 32768 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 24576 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 440371 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 16384 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 8192 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 0 heap: 62906368 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.043985367s of 11.063570023s, submitted: 6
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1040384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 442782 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1032192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1024000 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1015808 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1007616 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 452430 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 999424 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 991232 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 983040 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 974848 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.994940758s of 11.102183342s, submitted: 12
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 457252 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 942080 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 933888 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 925696 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 925696 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 462074 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 909312 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 901120 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 892928 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 884736 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464485 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 884736 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.941810608s of 11.959441185s, submitted: 8
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 876544 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 860160 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 843776 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 471722 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 835584 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 827392 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 819200 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 811008 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 474135 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 794624 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.860826492s of 10.089083672s, submitted: 10
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 786432 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 786432 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 481368 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 778240 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 770048 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 753664 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 745472 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 737280 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 729088 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 720896 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 712704 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 704512 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 679936 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 671744 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 663552 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 655360 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 647168 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 638976 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 630784 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 622592 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 614400 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 606208 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 598016 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 581632 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 573440 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 565248 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 557056 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 548864 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 532480 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 524288 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 516096 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 507904 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 491520 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 483328 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 483328 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 475136 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 466944 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 458752 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 450560 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 442368 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 434176 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 425984 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 409600 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 401408 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 393216 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 376832 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 376832 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 368640 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 360448 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 352256 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 327680 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 319488 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 303104 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 294912 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 270336 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 262144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 262144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 262144 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 253952 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 245760 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 229376 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63733760 unmapped: 221184 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63741952 unmapped: 212992 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63750144 unmapped: 204800 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63766528 unmapped: 188416 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63774720 unmapped: 180224 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 163840 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 155648 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 147456 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 139264 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 131072 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 122880 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 114688 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 114688 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 106496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 106496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 106496 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 98304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 98304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 98304 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 90112 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 90112 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 81920 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 81920 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 81920 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 73728 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 73728 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 73728 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 65536 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 65536 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 57344 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 49152 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 40960 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 32768 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 24576 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 16384 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 8192 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 0 heap: 63954944 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 1032192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 1024000 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1007616 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 999424 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 991232 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 983040 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 974848 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 966656 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 958464 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 950272 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 942080 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 933888 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 925696 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 925696 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 917504 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 909312 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 901120 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 892928 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 884736 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 876544 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 868352 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 860160 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 851968 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 843776 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 835584 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 827392 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 819200 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64192512 unmapped: 811008 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64200704 unmapped: 802816 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64208896 unmapped: 794624 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64217088 unmapped: 786432 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64225280 unmapped: 778240 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64233472 unmapped: 770048 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64241664 unmapped: 761856 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64249856 unmapped: 753664 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64258048 unmapped: 745472 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 737280 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 729088 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 720896 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64290816 unmapped: 712704 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 704512 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64307200 unmapped: 696320 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64315392 unmapped: 688128 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 679936 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 671744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 671744 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 663552 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 663552 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 663552 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 655360 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 655360 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64348160 unmapped: 655360 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 647168 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 647168 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 638976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 638976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64364544 unmapped: 638976 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 630784 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64380928 unmapped: 622592 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64380928 unmapped: 622592 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 614400 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64397312 unmapped: 606208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64397312 unmapped: 606208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64397312 unmapped: 606208 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 598016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 598016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 598016 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 589824 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 581632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64421888 unmapped: 581632 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64430080 unmapped: 573440 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 565248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 565248 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 557056 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 548864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 548864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 548864 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64462848 unmapped: 540672 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 532480 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 524288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 524288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64479232 unmapped: 524288 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64487424 unmapped: 516096 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64495616 unmapped: 507904 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64503808 unmapped: 499712 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64512000 unmapped: 491520 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 483328 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 475136 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64536576 unmapped: 466944 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64544768 unmapped: 458752 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 450560 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 442368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 442368 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64569344 unmapped: 434176 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 425984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 425984 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 417792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 417792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 417792 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64593920 unmapped: 409600 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 401408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 401408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64602112 unmapped: 401408 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 393216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64610304 unmapped: 393216 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64618496 unmapped: 385024 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 376832 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64634880 unmapped: 368640 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64643072 unmapped: 360448 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 352256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 352256 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 344064 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64667648 unmapped: 335872 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 327680 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 319488 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 319488 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64684032 unmapped: 319488 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 311296 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 311296 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 311296 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 311296 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 303104 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 303104 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64700416 unmapped: 303104 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 294912 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 294912 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 286720 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 286720 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 286720 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 278528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 278528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64724992 unmapped: 278528 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 270336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 270336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64733184 unmapped: 270336 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 262144 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 262144 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 4150 writes, 19K keys, 4150 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 4150 writes, 366 syncs, 11.34 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4150 writes, 19K keys, 4150 commit groups, 1.0 writes per commit group, ingest: 16.19 MB, 0.03 MB/s#012Interval WAL: 4150 writes, 366 syncs, 11.34 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559f074b98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x559f074b98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64806912 unmapped: 196608 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64815104 unmapped: 188416 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64823296 unmapped: 180224 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 172032 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64831488 unmapped: 172032 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 163840 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 163840 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64839680 unmapped: 163840 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 155648 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 155648 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64847872 unmapped: 155648 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 147456 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64856064 unmapped: 147456 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 139264 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 139264 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64864256 unmapped: 139264 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 131072 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 131072 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 122880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 122880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64880640 unmapped: 122880 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 114688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64888832 unmapped: 114688 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 106496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 106496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64897024 unmapped: 106496 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64905216 unmapped: 98304 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64905216 unmapped: 98304 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 81920 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 81920 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64921600 unmapped: 81920 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 73728 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64929792 unmapped: 73728 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64937984 unmapped: 65536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64937984 unmapped: 65536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64937984 unmapped: 65536 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 57344 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 57344 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64946176 unmapped: 57344 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 49152 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64954368 unmapped: 49152 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 40960 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 40960 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 32768 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 32768 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64970752 unmapped: 32768 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 24576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 24576 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 16384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 16384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64987136 unmapped: 16384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 8192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 64995328 unmapped: 8192 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 0 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 0 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65003520 unmapped: 0 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 1040384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65011712 unmapped: 1040384 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65019904 unmapped: 1032192 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 1024000 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65028096 unmapped: 1024000 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65036288 unmapped: 1015808 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 999424 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 991232 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65069056 unmapped: 983040 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65077248 unmapped: 974848 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 966656 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 966656 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65085440 unmapped: 966656 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 958464 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 958464 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 942080 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 942080 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65110016 unmapped: 942080 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 925696 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 917504 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 917504 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65134592 unmapped: 917504 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65142784 unmapped: 909312 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65150976 unmapped: 901120 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65159168 unmapped: 892928 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65167360 unmapped: 884736 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65183744 unmapped: 868352 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65191936 unmapped: 860160 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65200128 unmapped: 851968 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65208320 unmapped: 843776 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65216512 unmapped: 835584 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65224704 unmapped: 827392 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 819200 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65241088 unmapped: 811008 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65249280 unmapped: 802816 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65257472 unmapped: 794624 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65265664 unmapped: 786432 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65282048 unmapped: 770048 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 761856 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 753664 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65306624 unmapped: 745472 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65314816 unmapped: 737280 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 4150 writes, 19K keys, 4150 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 4150 writes, 366 syncs, 11.34 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559f074b98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559f074b98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown,
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65347584 unmapped: 704512 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65347584 unmapped: 704512 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65355776 unmapped: 696320 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 688128 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x47a11/0xac000, compress 0x0/0x0/0x0, omap 0x7a19, meta 0x1a285e7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 483779 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65372160 unmapped: 679936 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 1103.114379883s of 1103.126831055s, submitted: 6
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65503232 unmapped: 548864 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65617920 unmapped: 17219584 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 64 handle_osd_map epochs [64,65], i have 64, src has [1,65]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 65 ms_handle_reset con 0x559f08fd3c00 session 0x559f09d6b500
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65699840 unmapped: 17137664 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 65 heartbeat osd_stat(store_statfs(0x4fd914000/0x0/0x4ffc00000, data 0x84bbf8/0x8b6000, compress 0x0/0x0/0x0, omap 0x85f8, meta 0x1a27a08), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65699840 unmapped: 17137664 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 539076 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65830912 unmapped: 17006592 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 66 ms_handle_reset con 0x559f09d9a800 session 0x559f0b0ea380
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 66 heartbeat osd_stat(store_statfs(0x4fd910000/0x0/0x4ffc00000, data 0x84d204/0x8ba000, compress 0x0/0x0/0x0, omap 0x8924, meta 0x1a276dc), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 543588 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 66 heartbeat osd_stat(store_statfs(0x4fd910000/0x0/0x4ffc00000, data 0x84d204/0x8ba000, compress 0x0/0x0/0x0, omap 0x8924, meta 0x1a276dc), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 543588 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65855488 unmapped: 16982016 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.887508392s of 16.273990631s, submitted: 31
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 546216 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 546216 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 546216 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 546216 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 546216 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd90d000/0x0/0x4ffc00000, data 0x84e6b4/0x8bd000, compress 0x0/0x0/0x0, omap 0x8bf3, meta 0x1a2740d), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 65871872 unmapped: 16965632 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.484739304s of 26.491054535s, submitted: 13
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 16736256 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 68 ms_handle_reset con 0x559f0b2db800 session 0x559f0b1041c0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 16687104 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 552517 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 66150400 unmapped: 16687104 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd908000/0x0/0x4ffc00000, data 0x8500a4/0x8c2000, compress 0x0/0x0/0x0, omap 0x8f21, meta 0x1a270df), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 16588800 heap: 82837504 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 24756224 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 24715264 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 69 ms_handle_reset con 0x559f0b2db400 session 0x559f0b09f6c0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 69 heartbeat osd_stat(store_statfs(0x4fc10a000/0x0/0x4ffc00000, data 0x20500a4/0x20c2000, compress 0x0/0x0/0x0, omap 0x9267, meta 0x1a26d99), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 24690688 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 70 ms_handle_reset con 0x559f0b2dbc00 session 0x559f09d6a540
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 563102 data_alloc: 218103808 data_used: 666
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 23683072 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 70 heartbeat osd_stat(store_statfs(0x4fc105000/0x0/0x4ffc00000, data 0x2051671/0x20c5000, compress 0x0/0x0/0x0, omap 0x95e7, meta 0x1a26a19), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 71 ms_handle_reset con 0x559f08fd3c00 session 0x559f09ce7180
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 71 ms_handle_reset con 0x559f0b2db400 session 0x559f0b0b1dc0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fd8fe000/0x0/0x4ffc00000, data 0x852cc5/0x8ca000, compress 0x0/0x0/0x0, omap 0x9dc1, meta 0x1a2623f), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 23642112 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 71 ms_handle_reset con 0x559f0b2db800 session 0x559f0b0c9c00
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 71 ms_handle_reset con 0x559f09dcbc00 session 0x559f0b0c8380
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 23379968 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fd8fe000/0x0/0x4ffc00000, data 0x853ec3/0x8cb000, compress 0x0/0x0/0x0, omap 0xa06d, meta 0x1a25f93), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 23240704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.315886497s of 10.602708817s, submitted: 111
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 72 ms_handle_reset con 0x559f09dcb800 session 0x559f0b0c8fc0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 72 ms_handle_reset con 0x559f08fd3c00 session 0x559f0af396c0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 72 ms_handle_reset con 0x559f09dcbc00 session 0x559f0b079a40
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 23543808 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 572170 data_alloc: 218103808 data_used: 4743
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 23289856 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 73 ms_handle_reset con 0x559f08567c00 session 0x559f0b078c40
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 73 ms_handle_reset con 0x559f08566c00 session 0x559f09cc2700
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68403200 unmapped: 22831104 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 74 ms_handle_reset con 0x559f0af3bc00 session 0x559f09cc3a40
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 74 ms_handle_reset con 0x559f08566400 session 0x559f0b05cc40
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 22675456 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 22675456 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 75 heartbeat osd_stat(store_statfs(0x4fd8f1000/0x0/0x4ffc00000, data 0x8595b7/0x8d5000, compress 0x0/0x0/0x0, omap 0xb3a3, meta 0x1a24c5d), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 22675456 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 75 heartbeat osd_stat(store_statfs(0x4fd8f1000/0x0/0x4ffc00000, data 0x8595b7/0x8d5000, compress 0x0/0x0/0x0, omap 0xb3a3, meta 0x1a24c5d), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 580126 data_alloc: 218103808 data_used: 4727
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 22642688 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68403200 unmapped: 22831104 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 75 ms_handle_reset con 0x559f0af3bc00 session 0x559f0977ea80
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68403200 unmapped: 22831104 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68460544 unmapped: 22773760 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.737900734s of 10.061096191s, submitted: 136
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 77 ms_handle_reset con 0x559f08566c00 session 0x559f0977f880
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 22659072 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 78 ms_handle_reset con 0x559f09dcbc00 session 0x559f0b0c9340
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 78 heartbeat osd_stat(store_statfs(0x4fd8eb000/0x0/0x4ffc00000, data 0x85d114/0x8df000, compress 0x0/0x0/0x0, omap 0xc01d, meta 0x1a23fe3), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 602228 data_alloc: 218103808 data_used: 12849
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 78 heartbeat osd_stat(store_statfs(0x4fd8eb000/0x0/0x4ffc00000, data 0x85d114/0x8df000, compress 0x0/0x0/0x0, omap 0xc01d, meta 0x1a23fe3), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 22609920 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 79 ms_handle_reset con 0x559f0b2dbc00 session 0x559f09ce7180
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 22757376 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 80 ms_handle_reset con 0x559f09dcbc00 session 0x559f0b0c9880
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 69566464 unmapped: 21667840 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 81 ms_handle_reset con 0x559f08567c00 session 0x559f0977ee00
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 21610496 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 21610496 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 620604 data_alloc: 218103808 data_used: 12849
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 21610496 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 82 heartbeat osd_stat(store_statfs(0x4fd8d1000/0x0/0x4ffc00000, data 0x86555b/0x8f3000, compress 0x0/0x0/0x0, omap 0xd14b, meta 0x1a22eb5), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 21610496 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 83 ms_handle_reset con 0x559f08566c00 session 0x559f09cc2c40
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 20512768 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 84 ms_handle_reset con 0x559f08566400 session 0x559f09d6bdc0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 20488192 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.950626373s of 10.085215569s, submitted: 81
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 85 ms_handle_reset con 0x559f0af3bc00 session 0x559f0af39500
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 19333120 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 86 ms_handle_reset con 0x559f08566400 session 0x559f0af39c00
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 625457 data_alloc: 218103808 data_used: 12849
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 19038208 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 87 ms_handle_reset con 0x559f08566c00 session 0x559f0af388c0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 87 heartbeat osd_stat(store_statfs(0x4fd8d1000/0x0/0x4ffc00000, data 0x8695c0/0x8f9000, compress 0x0/0x0/0x0, omap 0xe12c, meta 0x1a21ed4), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 18882560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 88 ms_handle_reset con 0x559f08567c00 session 0x559f09cc3180
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 18857984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 18808832 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 88 heartbeat osd_stat(store_statfs(0x4fd8ca000/0x0/0x4ffc00000, data 0x86b287/0x8fc000, compress 0x0/0x0/0x0, omap 0xeb23, meta 0x1a214dd), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 18808832 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 629504 data_alloc: 218103808 data_used: 12849
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 18808832 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 18808832 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 88 heartbeat osd_stat(store_statfs(0x4fd8ca000/0x0/0x4ffc00000, data 0x86b287/0x8fc000, compress 0x0/0x0/0x0, omap 0xeb23, meta 0x1a214dd), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 18784256 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 18784256 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 18767872 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 631348 data_alloc: 218103808 data_used: 12849
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 18767872 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.084028244s of 12.300132751s, submitted: 126
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 18702336 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 90 ms_handle_reset con 0x559f09dcbc00 session 0x559f0b05c380
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 18694144 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 90 heartbeat osd_stat(store_statfs(0x4fc726000/0x0/0x4ffc00000, data 0x86e168/0x904000, compress 0x0/0x0/0x0, omap 0xf537, meta 0x2bc0ac9), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 18694144 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 18694144 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 638855 data_alloc: 218103808 data_used: 12849
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 18694144 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 18694144 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 90 ms_handle_reset con 0x559f0b2db800 session 0x559f09cc3c00
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 90 heartbeat osd_stat(store_statfs(0x4fc726000/0x0/0x4ffc00000, data 0x86e168/0x904000, compress 0x0/0x0/0x0, omap 0xf537, meta 0x2bc0ac9), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 18604032 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 92 ms_handle_reset con 0x559f08566c00 session 0x559f0af38000
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 92 ms_handle_reset con 0x559f08566400 session 0x559f090328c0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 92 ms_handle_reset con 0x559f08567c00 session 0x559f0977e700
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 18505728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 18505728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 646548 data_alloc: 218103808 data_used: 12865
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 18505728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 18505728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 92 heartbeat osd_stat(store_statfs(0x4fc71e000/0x0/0x4ffc00000, data 0x870d56/0x90a000, compress 0x0/0x0/0x0, omap 0xfe38, meta 0x2bc01c8), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 18505728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 18505728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 18505728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 646548 data_alloc: 218103808 data_used: 12865
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 18505728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 18505728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 92 ms_handle_reset con 0x559f09dcbc00 session 0x559f0b09f880
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.700133324s of 15.770095825s, submitted: 49
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fc71e000/0x0/0x4ffc00000, data 0x870d56/0x90a000, compress 0x0/0x0/0x0, omap 0xfe38, meta 0x2bc01c8), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 93 ms_handle_reset con 0x559f0b2db400 session 0x559f0b05d180
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 18374656 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 18374656 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 93 ms_handle_reset con 0x559f08566400 session 0x559f0b05ddc0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 18341888 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 95 ms_handle_reset con 0x559f08566c00 session 0x559f0b05da40
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 660485 data_alloc: 218103808 data_used: 12865
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 95 heartbeat osd_stat(store_statfs(0x4fc717000/0x0/0x4ffc00000, data 0x8737db/0x911000, compress 0x0/0x0/0x0, omap 0x10399, meta 0x2bbfc67), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 17973248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 17973248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 17973248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 95 heartbeat osd_stat(store_statfs(0x4fc6f1000/0x0/0x4ffc00000, data 0x898e2a/0x939000, compress 0x0/0x0/0x0, omap 0x10594, meta 0x2bbfa6c), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 17874944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 96 ms_handle_reset con 0x559f0af1d800 session 0x559f09032fc0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 17891328 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 97 ms_handle_reset con 0x559f0af1d400 session 0x559f0af38700
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 666817 data_alloc: 218103808 data_used: 19521
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fc6ee000/0x0/0x4ffc00000, data 0x89a413/0x93c000, compress 0x0/0x0/0x0, omap 0x10a4b, meta 0x2bbf5b5), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 16842752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 97 ms_handle_reset con 0x559f0ae9fc00 session 0x559f0b05c540
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 97 ms_handle_reset con 0x559f08566400 session 0x559f0b078700
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 16678912 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.905013084s of 10.014015198s, submitted: 64
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 98 ms_handle_reset con 0x559f08566c00 session 0x559f099688c0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 16646144 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 99 ms_handle_reset con 0x559f0af1d400 session 0x559f0aa6cc40
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 16629760 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 100 ms_handle_reset con 0x559f0af1d800 session 0x559f0977fc00
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 16605184 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 100 heartbeat osd_stat(store_statfs(0x4fc6e2000/0x0/0x4ffc00000, data 0x89fcaa/0x948000, compress 0x0/0x0/0x0, omap 0x11e47, meta 0x2bbe1b9), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 677227 data_alloc: 218103808 data_used: 19505
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 16605184 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 16605184 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 16605184 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 100 ms_handle_reset con 0x559f0ae9f000 session 0x559f0b05d500
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 100 ms_handle_reset con 0x559f08566c00 session 0x559f0b05c8c0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 100 ms_handle_reset con 0x559f08566400 session 0x559f0b0c9180
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 100 ms_handle_reset con 0x559f0af1d400 session 0x559f0b0d8a80
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 100 ms_handle_reset con 0x559f0af1d800 session 0x559f0af39500
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 100 ms_handle_reset con 0x559f08568400 session 0x559f09d6bc00
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 100 ms_handle_reset con 0x559f08566400 session 0x559f0b079dc0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 16408576 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 100 heartbeat osd_stat(store_statfs(0x4fc6e2000/0x0/0x4ffc00000, data 0x89fcaa/0x948000, compress 0x0/0x0/0x0, omap 0x11f49, meta 0x2bbe0b7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 16408576 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 100 ms_handle_reset con 0x559f08566c00 session 0x559f0b0eb6c0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 678372 data_alloc: 218103808 data_used: 20137
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 100 heartbeat osd_stat(store_statfs(0x4fc6e2000/0x0/0x4ffc00000, data 0x89fcaa/0x948000, compress 0x0/0x0/0x0, omap 0x11f49, meta 0x2bbe0b7), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 16392192 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 16392192 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.915824890s of 10.005084038s, submitted: 54
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 16392192 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 16457728 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 101 ms_handle_reset con 0x559f0b2dbc00 session 0x559f08c38540
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 102 ms_handle_reset con 0x559f0b2db400 session 0x559f090328c0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 102 ms_handle_reset con 0x559f0b2ea400 session 0x559f0977ea80
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 102 ms_handle_reset con 0x559f0b2ea000 session 0x559f0aa6ddc0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 102 ms_handle_reset con 0x559f08566400 session 0x559f0b05da40
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 16064512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 102 ms_handle_reset con 0x559f08566c00 session 0x559f0b0c96c0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 689739 data_alloc: 218103808 data_used: 24268
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fc6db000/0x0/0x4ffc00000, data 0x8a2782/0x94f000, compress 0x0/0x0/0x0, omap 0x12951, meta 0x2bbd6af), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 16023552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 104 ms_handle_reset con 0x559f0b2db400 session 0x559f0b083180
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 16015360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 104 ms_handle_reset con 0x559f0b2dbc00 session 0x559f09cc21c0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 104 ms_handle_reset con 0x559f0b2dbc00 session 0x559f0aa6d6c0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 104 ms_handle_reset con 0x559f0b2ea800 session 0x559f0b0eaa80
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 15998976 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 104 ms_handle_reset con 0x559f0b2eac00 session 0x559f0af38000
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 14901248 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 105 ms_handle_reset con 0x559f0b2eb000 session 0x559f0b082fc0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 14868480 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694003 data_alloc: 218103808 data_used: 24268
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 105 ms_handle_reset con 0x559f0af1d400 session 0x559f0b0c9340
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 105 ms_handle_reset con 0x559f0af1d800 session 0x559f0977f340
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 14868480 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fc6d2000/0x0/0x4ffc00000, data 0x8a69c9/0x958000, compress 0x0/0x0/0x0, omap 0x1384e, meta 0x2bbc7b2), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 106 ms_handle_reset con 0x559f0b2dbc00 session 0x559f09cc2000
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 14860288 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 14819328 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.597695351s of 11.822847366s, submitted: 164
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 14819328 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 107 heartbeat osd_stat(store_statfs(0x4fc6ce000/0x0/0x4ffc00000, data 0x8a948c/0x95c000, compress 0x0/0x0/0x0, omap 0x13f68, meta 0x2bbc098), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 107 ms_handle_reset con 0x559f08567c00 session 0x559f09032e00
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 107 ms_handle_reset con 0x559f09dcbc00 session 0x559f0b078e00
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 14819328 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 107 ms_handle_reset con 0x559f0b2ea800 session 0x559f09033180
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 695237 data_alloc: 218103808 data_used: 19252
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 14909440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 107 heartbeat osd_stat(store_statfs(0x4fc6f4000/0x0/0x4ffc00000, data 0x885469/0x937000, compress 0x0/0x0/0x0, omap 0x13f68, meta 0x2bbc098), peers [0,1] op hist [0,0,0,1])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 107 ms_handle_reset con 0x559f08567c00 session 0x559f090328c0
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 14909440 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 108 ms_handle_reset con 0x559f09dcbc00 session 0x559f08c38700
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 108 heartbeat osd_stat(store_statfs(0x4fc6ef000/0x0/0x4ffc00000, data 0x886ad2/0x93a000, compress 0x0/0x0/0x0, omap 0x145c3, meta 0x2bbba3d), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 108 heartbeat osd_stat(store_statfs(0x4fc6ef000/0x0/0x4ffc00000, data 0x886ad2/0x93a000, compress 0x0/0x0/0x0, omap 0x145c3, meta 0x2bbba3d), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 698135 data_alloc: 218103808 data_used: 19252
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 108 heartbeat osd_stat(store_statfs(0x4fc6ef000/0x0/0x4ffc00000, data 0x886ad2/0x93a000, compress 0x0/0x0/0x0, omap 0x145c3, meta 0x2bbba3d), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.643769264s of 10.903412819s, submitted: 68
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6ef000/0x0/0x4ffc00000, data 0x886ad2/0x93a000, compress 0x0/0x0/0x0, omap 0x145c3, meta 0x2bbba3d), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 700845 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 109 heartbeat osd_stat(store_statfs(0x4fc6ed000/0x0/0x4ffc00000, data 0x887f9e/0x93d000, compress 0x0/0x0/0x0, omap 0x14858, meta 0x2bbb7a8), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread fragmentation_score=0.000147 took=0.000030s
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 14893056 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 14753792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'config diff' '{prefix=config diff}'
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'config show' '{prefix=config show}'
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'counter dump' '{prefix=counter dump}'
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'counter schema' '{prefix=counter schema}'
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 14147584 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 14344192 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 76947456 unmapped: 14286848 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'log dump' '{prefix=log dump}'
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'perf dump' '{prefix=perf dump}'
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'perf schema' '{prefix=perf schema}'
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 13926400 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 14057472 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 14049280 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 14041088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 14041088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 14041088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 14041088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 14041088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 14041088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 14041088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1857107739' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 14041088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 14041088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 14041088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 14041088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 14041088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 14041088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 14041088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 14041088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 14041088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 14041088 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 14032896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 14032896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 14032896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 14032896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 14032896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 14032896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 14032896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 14032896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 14032896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 14032896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 14032896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 14032896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 14032896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 14032896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 14032896 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 14024704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 14024704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 14024704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 14024704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 14024704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 14024704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 14024704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 14024704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 14024704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 14024704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 14024704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 14024704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 14024704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 14024704 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 14016512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 14016512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 14016512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 14016512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 14016512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 14016512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 14016512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 14016512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 14016512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 14016512 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 14008320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 14008320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 14008320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 14008320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 14008320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 14008320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 14008320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 14008320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 14008320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 14008320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 14008320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 14008320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 14008320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 14008320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 14008320 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 14000128 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 13991936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 13991936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 13991936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 13991936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 13991936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 13991936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 13991936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 13991936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 13991936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 13991936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 13991936 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 13983744 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 13975552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 13975552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 13975552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 13975552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 13975552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 13975552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 13975552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 13975552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 13975552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 13975552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 13975552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 13975552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 13975552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 13975552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 13975552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 13975552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 13975552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 13975552 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 13967360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 13967360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 13967360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 13967360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 13967360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 13967360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 13967360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 13967360 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 14098432 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 14098432 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 14098432 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 14098432 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 14098432 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 14090240 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 14090240 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 14090240 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 14090240 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 14090240 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 14090240 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 14090240 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 14090240 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 14090240 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 14090240 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 14090240 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 14090240 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77144064 unmapped: 14090240 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 14082048 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 14082048 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 14082048 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 14082048 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 14082048 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 14082048 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 14082048 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 14082048 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 14082048 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 14082048 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 5828 writes, 23K keys, 5828 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5828 writes, 1121 syncs, 5.20 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1678 writes, 4312 keys, 1678 commit groups, 1.0 writes per commit group, ingest: 2.35 MB, 0.00 MB/s#012Interval WAL: 1678 writes, 755 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 14082048 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 14082048 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 14073856 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 14073856 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 14073856 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 14073856 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: mgrc ms_handle_reset ms_handle_reset con 0x559f08df0000
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1553116858
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1553116858,v1:192.168.122.100:6801/1553116858]
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: mgrc handle_mgr_configure stats_period=5
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 13803520 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 13803520 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 13803520 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 13803520 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 13803520 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 13803520 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 13803520 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 13803520 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 13803520 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 13795328 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 13795328 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 13787136 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 13787136 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 13787136 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 13787136 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 13787136 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 13787136 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 13787136 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 13787136 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 13787136 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 13787136 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 13787136 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 13787136 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 13787136 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 13787136 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 13787136 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 13787136 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 13778944 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 13770752 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15070 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 13762560 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 13754368 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 13754368 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 13754368 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 13754368 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 13754368 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 13754368 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 13754368 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 13754368 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 13754368 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 13754368 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 13754368 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 13746176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 13746176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 13746176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 13746176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 13746176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 13746176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 13746176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 13746176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 13746176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 13746176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 13746176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 13746176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 13746176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 13746176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 13746176 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 13737984 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 13729792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 13729792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 13729792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 13729792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 13729792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 13729792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 13729792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 13729792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 13729792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 13729792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 13729792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 13729792 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 13721600 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 13721600 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 13721600 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 13721600 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 13721600 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 13721600 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 13721600 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 13721600 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 13721600 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 13721600 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 13721600 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 13721600 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 13713408 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 13713408 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 13713408 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 13713408 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 13713408 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 13713408 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 13713408 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 13713408 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 13713408 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 13713408 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 13713408 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 13713408 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 13713408 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 13713408 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'config diff' '{prefix=config diff}'
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'config show' '{prefix=config show}'
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'counter dump' '{prefix=counter dump}'
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'counter schema' '{prefix=counter schema}'
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 13721600 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 13910016 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 703619 data_alloc: 218103808 data_used: 23313
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fc6ea000/0x0/0x4ffc00000, data 0x88944e/0x940000, compress 0x0/0x0/0x0, omap 0x14bb8, meta 0x2bbb448), peers [0,1] op hist [])
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 13901824 heap: 91234304 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:43 np0005544708 ceph-osd[88129]: do_command 'log dump' '{prefix=log dump}'
Dec  3 16:43:44 np0005544708 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  3 16:43:44 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec  3 16:43:44 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/647628077' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec  3 16:43:44 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15074 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:43:44 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1162: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:44 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15078 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:43:44 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec  3 16:43:44 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1918744001' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec  3 16:43:45 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  3 16:43:45 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15082 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:43:45 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec  3 16:43:45 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1382647860' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec  3 16:43:45 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15086 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:43:45 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  3 16:43:45 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1263793158' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec  3 16:43:46 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15088 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  3 16:43:46 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec  3 16:43:46 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/828817084' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec  3 16:43:46 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1163: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:46 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15092 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 16:43:47 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec  3 16:43:47 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2530743118' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec  3 16:43:47 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15096 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 16:43:47 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15100 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 16:43:48 np0005544708 ceph-mon[75204]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Dec  3 16:43:48 np0005544708 ceph-mon[75204]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2129293945' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 1048576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66052096 unmapped: 1048576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 1040384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 468059 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 1032192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.061655045s of 14.075113297s, submitted: 6
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 1015808 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66084864 unmapped: 1015808 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66093056 unmapped: 1007616 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 472885 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66101248 unmapped: 999424 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66109440 unmapped: 991232 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66117632 unmapped: 983040 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 477709 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 974848 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66125824 unmapped: 974848 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 966656 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66134016 unmapped: 966656 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.014001846s of 12.029939651s, submitted: 8
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 480122 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66158592 unmapped: 942080 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66174976 unmapped: 925696 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 917504 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 484946 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66191360 unmapped: 909312 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66199552 unmapped: 901120 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 892928 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 884736 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 487359 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 884736 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.013473511s of 12.029477119s, submitted: 8
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66215936 unmapped: 884736 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66224128 unmapped: 876544 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66232320 unmapped: 868352 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66240512 unmapped: 860160 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 492183 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66248704 unmapped: 851968 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 843776 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66265088 unmapped: 835584 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 811008 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 497007 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 811008 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66289664 unmapped: 811008 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 802816 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.861497879s of 11.878818512s, submitted: 8
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 802816 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 794624 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501829 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 794624 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 794624 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 786432 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 786432 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 770048 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 501829 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 761856 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 753664 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.959852219s of 11.973832130s, submitted: 6
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 506651 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 712704 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 704512 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 696320 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 506651 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 688128 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 679936 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 671744 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 663552 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 506651 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 663552 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.087844849s of 12.090794563s, submitted: 2
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 647168 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 647168 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 638976 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 638976 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 509062 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 638976 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 630784 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 622592 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 598016 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 589824 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 518706 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 581632 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 573440 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.991565704s of 11.008629799s, submitted: 10
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 565248 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 557056 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 523528 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 548864 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 532480 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 532480 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 524288 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 524288 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 528352 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 524288 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 491520 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 491520 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.000996590s of 11.020680428s, submitted: 10
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 483328 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 483328 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 535589 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 483328 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 466944 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 458752 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 458752 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 540413 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 425984 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.957039833s of 10.982520103s, submitted: 12
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 550057 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 554879 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 450560 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 442368 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 434176 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 434176 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 425984 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 425984 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 425984 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 417792 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 417792 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 409600 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 401408 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 393216 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 385024 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 385024 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 385024 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 376832 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 376832 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 376832 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 368640 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 360448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 360448 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 352256 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 344064 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 319488 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 311296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 311296 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 303104 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 294912 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 286720 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 278528 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 270336 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 262144 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 253952 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 229376 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-mgr[75500]: log_channel(audit) log [DBG] : from='client.15104 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  3 16:43:48 np0005544708 ceph-mgr[75500]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 16:43:48 np0005544708 ceph-c21de27e-a7fd-594b-8324-0697ba9aab3a-mgr-compute-0-jxauqt[75496]: 2025-12-03T21:43:48.456+0000 7fa8c63a3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 221184 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 212992 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 204800 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 196608 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 188416 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 180224 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 172032 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 155648 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 147456 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 139264 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 131072 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 122880 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 114688 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 106496 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 98304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 65536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 57344 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 49152 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 40960 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 32768 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24576 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 16384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 16384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 8192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 0 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 1040384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 1032192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 1024000 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 1015808 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 1007616 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 999424 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 991232 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 983040 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 983040 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 983040 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 974848 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 974848 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 966656 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 958464 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 950272 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 942080 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 933888 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 925696 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 917504 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 909312 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 901120 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 901120 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 901120 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 892928 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 892928 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 884736 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 876544 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 876544 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 876544 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 868352 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 868352 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 860160 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 860160 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 851968 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 843776 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 835584 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 835584 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 827392 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 819200 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 811008 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 811008 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 802816 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 802816 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 802816 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 794624 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 786432 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 778240 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 770048 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 761856 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 753664 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 745472 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 737280 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 729088 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 720896 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 720896 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 712704 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 704512 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 704512 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 696320 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 688128 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 688128 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 688128 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 679936 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 671744 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 663552 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 655360 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 647168 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 638976 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 630784 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 630784 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 630784 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 622592 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 614400 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 598016 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 589824 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 581632 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 573440 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 573440 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 565248 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 557056 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 557056 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 548864 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 540672 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 532480 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 524288 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 524288 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 516096 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 507904 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 499712 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 491520 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 491520 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 483328 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 483328 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 475136 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 466944 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 466944 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 458752 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 450560 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 442368 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 434176 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 434176 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 434176 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 425984 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 417792 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 409600 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 409600 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 409600 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 401408 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 401408 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 393216 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 385024 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 376832 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 376832 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 368640 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 4515 writes, 20K keys, 4515 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 4515 writes, 505 syncs, 8.94 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4515 writes, 20K keys, 4515 commit groups, 1.0 writes per commit group, ingest: 16.57 MB, 0.03 MB/s#012Interval WAL: 4515 writes, 505 syncs, 8.94 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cf1bed58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55cf1bed58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 286720 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 278528 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 270336 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 270336 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 270336 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 262144 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 262144 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 253952 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 245760 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 245760 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 237568 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 237568 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 237568 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 229376 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 221184 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 221184 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 221184 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 212992 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 204800 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 196608 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 188416 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 188416 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 180224 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 172032 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 163840 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 163840 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 163840 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 155648 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 147456 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 147456 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 139264 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 131072 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 122880 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 114688 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 106496 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 106496 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 106496 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 98304 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 90112 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 81920 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 73728 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 65536 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 57344 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 49152 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 40960 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 32768 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 24576 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 16384 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 8192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 8192 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 0 heap: 68149248 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 1040384 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 1032192 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 1032192 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 1024000 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68182016 unmapped: 1015808 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 1007616 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68198400 unmapped: 999424 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 991232 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 983040 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 983040 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 974848 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 958464 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 950272 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 942080 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 933888 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 925696 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 917504 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 909312 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 901120 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: mgrc ms_handle_reset ms_handle_reset con 0x55cf1d7fe000
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1553116858
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1553116858,v1:192.168.122.100:6801/1553116858]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: mgrc handle_mgr_configure stats_period=5
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68313088 unmapped: 884736 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 ms_handle_reset con 0x55cf1ee92400 session 0x55cf1e2d7880
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 ms_handle_reset con 0x55cf1ee92800 session 0x55cf1e2416c0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 622592 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68517888 unmapped: 679936 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 671744 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 663552 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 663552 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 663552 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 663552 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.0 total, 600.0 interval
Cumulative writes: 4515 writes, 20K keys, 4515 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 4515 writes, 505 syncs, 8.94 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55cf1bed58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55cf1bed58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, i
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 655360 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 647168 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 647168 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 647168 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 647168 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 647168 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 647168 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 647168 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 647168 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 647168 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 557290 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 647168 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8057/0x11f000, compress 0x0/0x0/0x0, omap 0xad29, meta 0x1a252d7), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 647168 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 647168 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 647168 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 647168 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 62 handle_osd_map epochs [63,63], i have 62, src has [1,63]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 1025.044555664s of 1025.062866211s, submitted: 10
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 562450 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 491520 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0a7000/0x0/0x4ffc00000, data 0xb962c/0x123000, compress 0x0/0x0/0x0, omap 0xafb1, meta 0x1a2504f), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 450560 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 65 ms_handle_reset con 0x55cf1eea4000 session 0x55cf1fb84700
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 16900096 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 16883712 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 65 heartbeat osd_stat(store_statfs(0x4fd89b000/0x0/0x4ffc00000, data 0x8bc287/0x92b000, compress 0x0/0x0/0x0, omap 0xb4cd, meta 0x1a24b33), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 16883712 heap: 85983232 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 65 heartbeat osd_stat(store_statfs(0x4fd89b000/0x0/0x4ffc00000, data 0x8bc287/0x92b000, compress 0x0/0x0/0x0, omap 0xb4cd, meta 0x1a24b33), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 682603 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 66 ms_handle_reset con 0x55cf1eea5000 session 0x55cf1fbcb6c0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 66 heartbeat osd_stat(store_statfs(0x4fcc2a000/0x0/0x4ffc00000, data 0x152d8a3/0x15a0000, compress 0x0/0x0/0x0, omap 0xbaaf, meta 0x1a24551), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 685815 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 66 heartbeat osd_stat(store_statfs(0x4fcc2a000/0x0/0x4ffc00000, data 0x152d8a3/0x15a0000, compress 0x0/0x0/0x0, omap 0xbaaf, meta 0x1a24551), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 66 heartbeat osd_stat(store_statfs(0x4fcc2a000/0x0/0x4ffc00000, data 0x152d8a3/0x15a0000, compress 0x0/0x0/0x0, omap 0xbaaf, meta 0x1a24551), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.130791664s of 11.549943924s, submitted: 54
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 688139 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fcc27000/0x0/0x4ffc00000, data 0x152ed53/0x15a3000, compress 0x0/0x0/0x0, omap 0xbcdd, meta 0x1a24323), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fcc27000/0x0/0x4ffc00000, data 0x152ed53/0x15a3000, compress 0x0/0x0/0x0, omap 0xbcdd, meta 0x1a24323), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fcc27000/0x0/0x4ffc00000, data 0x152ed53/0x15a3000, compress 0x0/0x0/0x0, omap 0xbcdd, meta 0x1a24323), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 688139 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fcc27000/0x0/0x4ffc00000, data 0x152ed53/0x15a3000, compress 0x0/0x0/0x0, omap 0xbcdd, meta 0x1a24323), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fcc27000/0x0/0x4ffc00000, data 0x152ed53/0x15a3000, compress 0x0/0x0/0x0, omap 0xbcdd, meta 0x1a24323), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 688139 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 688139 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fcc27000/0x0/0x4ffc00000, data 0x152ed53/0x15a3000, compress 0x0/0x0/0x0, omap 0xbcdd, meta 0x1a24323), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 688139 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fcc27000/0x0/0x4ffc00000, data 0x152ed53/0x15a3000, compress 0x0/0x0/0x0, omap 0xbcdd, meta 0x1a24323), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fcc27000/0x0/0x4ffc00000, data 0x152ed53/0x15a3000, compress 0x0/0x0/0x0, omap 0xbcdd, meta 0x1a24323), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 688139 data_alloc: 218103808 data_used: 658
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fcc27000/0x0/0x4ffc00000, data 0x152ed53/0x15a3000, compress 0x0/0x0/0x0, omap 0xbcdd, meta 0x1a24323), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 23904256 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.202196121s of 31.212696075s, submitted: 13
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 68 ms_handle_reset con 0x55cf1faa4400 session 0x55cf1f791a40
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 23699456 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 23699456 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fcc23000/0x0/0x4ffc00000, data 0x1530330/0x15a7000, compress 0x0/0x0/0x0, omap 0xc2b9, meta 0x1a23d47), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fcc23000/0x0/0x4ffc00000, data 0x1530330/0x15a7000, compress 0x0/0x0/0x0, omap 0xc2b9, meta 0x1a23d47), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694334 data_alloc: 218103808 data_used: 677
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 23699456 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 22200320 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fc424000/0x0/0x4ffc00000, data 0x1d30353/0x1da8000, compress 0x0/0x0/0x0, omap 0xc2b9, meta 0x1a23d47), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 21954560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 69 ms_handle_reset con 0x55cf1fb24c00 session 0x55cf1fbe4e00
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 21938176 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 69 heartbeat osd_stat(store_statfs(0x4f9c1f000/0x0/0x4ffc00000, data 0x4531923/0x45ab000, compress 0x0/0x0/0x0, omap 0xc53b, meta 0x1a23ac5), peers [0,2] op hist [1])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 70 ms_handle_reset con 0x55cf1e773c00 session 0x55cf1e2b2000
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 20766720 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 71 ms_handle_reset con 0x55cf1eea4000 session 0x55cf1e2408c0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 713122 data_alloc: 218103808 data_used: 677
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 71 ms_handle_reset con 0x55cf1eea5000 session 0x55cf1e704e00
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 20643840 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 71 ms_handle_reset con 0x55cf1faa4400 session 0x55cf1e704fc0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 71 ms_handle_reset con 0x55cf1fadb400 session 0x55cf1da9a8c0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 20463616 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 20332544 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.184928894s of 10.620976448s, submitted: 112
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 72 ms_handle_reset con 0x55cf1fadfc00 session 0x55cf1f7916c0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 20480000 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 72 ms_handle_reset con 0x55cf1eea4000 session 0x55cf1dc25c00
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 72 ms_handle_reset con 0x55cf1eea5000 session 0x55cf1dc24700
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 72 heartbeat osd_stat(store_statfs(0x4fcc15000/0x0/0x4ffc00000, data 0x1535f4b/0x15b5000, compress 0x0/0x0/0x0, omap 0xd0d5, meta 0x1a22f2b), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 20480000 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 73 ms_handle_reset con 0x55cf1fdaa800 session 0x55cf1e3c1880
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 73 ms_handle_reset con 0x55cf1fadb400 session 0x55cf1d5af180
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 723385 data_alloc: 218103808 data_used: 677
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 20226048 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 73 heartbeat osd_stat(store_statfs(0x4fcc13000/0x0/0x4ffc00000, data 0x1537959/0x15b7000, compress 0x0/0x0/0x0, omap 0xd866, meta 0x1a2279a), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 75 ms_handle_reset con 0x55cf1fdd5400 session 0x55cf1fb44fc0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 75 ms_handle_reset con 0x55cf1fdd5800 session 0x55cf1ee96a80
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 75 heartbeat osd_stat(store_statfs(0x4fcc13000/0x0/0x4ffc00000, data 0x1537959/0x15b7000, compress 0x0/0x0/0x0, omap 0xd866, meta 0x1a2279a), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 19742720 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 19693568 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 19693568 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 19693568 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 75 heartbeat osd_stat(store_statfs(0x4fcc11000/0x0/0x4ffc00000, data 0x1539c56/0x15bb000, compress 0x0/0x0/0x0, omap 0xdf34, meta 0x1a220cc), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 725390 data_alloc: 218103808 data_used: 677
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 19628032 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 76 ms_handle_reset con 0x55cf1fdd5400 session 0x55cf1e2d6700
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 19619840 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 76 heartbeat osd_stat(store_statfs(0x4fba70000/0x0/0x4ffc00000, data 0x1539c79/0x15bc000, compress 0x0/0x0/0x0, omap 0xdf34, meta 0x2bc20cc), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 76 heartbeat osd_stat(store_statfs(0x4fba6b000/0x0/0x4ffc00000, data 0x153b262/0x15bf000, compress 0x0/0x0/0x0, omap 0xe1c1, meta 0x2bc1e3f), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 19603456 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.756548882s of 10.034585953s, submitted: 104
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 77 ms_handle_reset con 0x55cf1eea4000 session 0x55cf1fb85500
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 19603456 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 78 ms_handle_reset con 0x55cf1fdd4c00 session 0x55cf1fb44540
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 19570688 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 79 ms_handle_reset con 0x55cf1fdd4800 session 0x55cf1e2b3a40
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 747169 data_alloc: 218103808 data_used: 677
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 19562496 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 80 ms_handle_reset con 0x55cf1eea4000 session 0x55cf1d865880
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 19562496 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 81 heartbeat osd_stat(store_statfs(0x4fba60000/0x0/0x4ffc00000, data 0x153fa30/0x15cc000, compress 0x0/0x0/0x0, omap 0xeb96, meta 0x2bc146a), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 82 ms_handle_reset con 0x55cf1fdd5400 session 0x55cf1ee97a40
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 82 heartbeat osd_stat(store_statfs(0x4fba54000/0x0/0x4ffc00000, data 0x15424fc/0x15d4000, compress 0x0/0x0/0x0, omap 0xf4d9, meta 0x2bc0b27), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 18595840 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 18595840 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 82 heartbeat osd_stat(store_statfs(0x4fba4f000/0x0/0x4ffc00000, data 0x1543ac9/0x15d7000, compress 0x0/0x0/0x0, omap 0xf87b, meta 0x2bc0785), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 18595840 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 761402 data_alloc: 218103808 data_used: 677
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 18685952 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 83 heartbeat osd_stat(store_statfs(0x4fba55000/0x0/0x4ffc00000, data 0x1543ac9/0x15d7000, compress 0x0/0x0/0x0, omap 0xfaa1, meta 0x2bc055f), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 83 ms_handle_reset con 0x55cf1fdd5800 session 0x55cf1e75f6c0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 18546688 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 84 ms_handle_reset con 0x55cf1fb24c00 session 0x55cf1e2b3a40
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 18382848 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.886656761s of 10.113715172s, submitted: 149
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 85 ms_handle_reset con 0x55cf1e773000 session 0x55cf1fbcac40
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 18350080 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fba4f000/0x0/0x4ffc00000, data 0x15466d8/0x15db000, compress 0x0/0x0/0x0, omap 0x105a5, meta 0x2bbfa5b), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 86 ms_handle_reset con 0x55cf1eea4000 session 0x55cf1dc24c40
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 18268160 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 770245 data_alloc: 218103808 data_used: 12860
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 87 ms_handle_reset con 0x55cf1fb24c00 session 0x55cf1f791a40
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 18145280 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 88 ms_handle_reset con 0x55cf1fdd5400 session 0x55cf1ee96e00
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 88 heartbeat osd_stat(store_statfs(0x4fba49000/0x0/0x4ffc00000, data 0x154a328/0x15e0000, compress 0x0/0x0/0x0, omap 0x10d34, meta 0x2bbf2cc), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 18096128 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 18096128 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 88 heartbeat osd_stat(store_statfs(0x4fba44000/0x0/0x4ffc00000, data 0x154b926/0x15e2000, compress 0x0/0x0/0x0, omap 0x10fef, meta 0x2bbf011), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 18079744 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 18079744 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 772529 data_alloc: 218103808 data_used: 12860
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 18079744 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 18202624 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 89 heartbeat osd_stat(store_statfs(0x4fba45000/0x0/0x4ffc00000, data 0x154cdf2/0x15e5000, compress 0x0/0x0/0x0, omap 0x1120b, meta 0x2bbedf5), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 18202624 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 18202624 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 18202624 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 774677 data_alloc: 218103808 data_used: 20982
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 18194432 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.887969017s of 13.055091858s, submitted: 117
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 90 ms_handle_reset con 0x55cf1fdd5800 session 0x55cf1ee97340
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 18202624 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 18202624 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 90 heartbeat osd_stat(store_statfs(0x4fba41000/0x0/0x4ffc00000, data 0x154e41a/0x15e9000, compress 0x0/0x0/0x0, omap 0x11621, meta 0x2bbe9df), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 18202624 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 18202624 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 779189 data_alloc: 218103808 data_used: 20982
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 18202624 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 90 ms_handle_reset con 0x55cf1fdd4c00 session 0x55cf1fb85dc0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 18055168 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 92 ms_handle_reset con 0x55cf1fb24c00 session 0x55cf1fb841c0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 92 ms_handle_reset con 0x55cf1eea4000 session 0x55cf1e2b2000
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 92 ms_handle_reset con 0x55cf1fdd5000 session 0x55cf1fbb0fc0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 18014208 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 92 heartbeat osd_stat(store_statfs(0x4fba37000/0x0/0x4ffc00000, data 0x155105d/0x15f1000, compress 0x0/0x0/0x0, omap 0x11ad7, meta 0x2bbe529), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 18014208 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 18014208 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 790116 data_alloc: 218103808 data_used: 21017
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 18014208 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 18014208 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 18014208 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 18014208 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 92 heartbeat osd_stat(store_statfs(0x4fba37000/0x0/0x4ffc00000, data 0x155105d/0x15f1000, compress 0x0/0x0/0x0, omap 0x11ad7, meta 0x2bbe529), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 18014208 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 790116 data_alloc: 218103808 data_used: 21017
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 18014208 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 18014208 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.561586380s of 15.613102913s, submitted: 36
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 17956864 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x155250d/0x15f4000, compress 0x0/0x0/0x0, omap 0x11d8c, meta 0x2bbe274), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 17956864 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 93 handle_osd_map epochs [95,95], i have 93, src has [1,95]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 93 handle_osd_map epochs [94,95], i have 93, src has [1,95]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 95 ms_handle_reset con 0x55cf1fdd5800 session 0x55cf1e2d61c0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 17899520 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 800233 data_alloc: 218103808 data_used: 21017
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 17883136 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 95 heartbeat osd_stat(store_statfs(0x4fba2e000/0x0/0x4ffc00000, data 0x15550fb/0x15fa000, compress 0x0/0x0/0x0, omap 0x121e2, meta 0x2bbde1e), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 17883136 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 17883136 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 96 ms_handle_reset con 0x55cf2001a800 session 0x55cf1ee96700
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 16654336 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 97 ms_handle_reset con 0x55cf1eea4000 session 0x55cf1e2408c0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 16572416 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 97 ms_handle_reset con 0x55cf1fb24c00 session 0x55cf1ee97dc0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810508 data_alloc: 218103808 data_used: 21083
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 16572416 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 97 ms_handle_reset con 0x55cf1fdd5000 session 0x55cf1fbca1c0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 98 ms_handle_reset con 0x55cf1fdd5800 session 0x55cf1fb45880
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 98 heartbeat osd_stat(store_statfs(0x4fba26000/0x0/0x4ffc00000, data 0x1557cfb/0x1602000, compress 0x0/0x0/0x0, omap 0x12956, meta 0x2bbd6aa), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 16654336 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.186281204s of 10.288720131s, submitted: 73
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 99 ms_handle_reset con 0x55cf2001ac00 session 0x55cf1d5ae1c0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 16564224 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 100 ms_handle_reset con 0x55cf1eea4000 session 0x55cf1fb85880
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 16547840 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 16547840 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 100 heartbeat osd_stat(store_statfs(0x4fba1c000/0x0/0x4ffc00000, data 0x155bf8b/0x160a000, compress 0x0/0x0/0x0, omap 0x13542, meta 0x2bbcabe), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 818980 data_alloc: 218103808 data_used: 21083
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 16547840 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 16523264 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 100 ms_handle_reset con 0x55cf1fb24c00 session 0x55cf1fb856c0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 100 ms_handle_reset con 0x55cf1fdd5800 session 0x55cf1e2d7500
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 100 ms_handle_reset con 0x55cf1fdd5000 session 0x55cf1fb84380
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 100 ms_handle_reset con 0x55cf1f7a6800 session 0x55cf1fb85180
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 100 ms_handle_reset con 0x55cf1eea4000 session 0x55cf1fb84000
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 100 ms_handle_reset con 0x55cf1fadfc00 session 0x55cf1fb84540
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 100 ms_handle_reset con 0x55cf1f7fd400 session 0x55cf1ee976c0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 16506880 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 100 ms_handle_reset con 0x55cf1d879c00 session 0x55cf1e75e540
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 16506880 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 100 ms_handle_reset con 0x55cf1e273000 session 0x55cf1fb5a000
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 100 ms_handle_reset con 0x55cf1d879c00 session 0x55cf1fb45180
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 100 ms_handle_reset con 0x55cf1eea5000 session 0x55cf1fb9a1c0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 823271 data_alloc: 218103808 data_used: 21083
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 100 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x157ffdd/0x1630000, compress 0x0/0x0/0x0, omap 0x13542, meta 0x2bbcabe), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 16203776 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 101 ms_handle_reset con 0x55cf1fadfc00 session 0x55cf1dc24380
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.026945114s of 11.134009361s, submitted: 77
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 102 ms_handle_reset con 0x55cf1faa4000 session 0x55cf1f848380
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 102 ms_handle_reset con 0x55cf1faa4400 session 0x55cf1e3c0c40
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 102 ms_handle_reset con 0x55cf1fdaa000 session 0x55cf1dc256c0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 15613952 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 102 ms_handle_reset con 0x55cf1faa4400 session 0x55cf1d5aea80
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 103 ms_handle_reset con 0x55cf1d879c00 session 0x55cf1fbca700
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 15605760 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 103 heartbeat osd_stat(store_statfs(0x4fb9ed000/0x0/0x4ffc00000, data 0x15840c3/0x163a000, compress 0x0/0x0/0x0, omap 0x13c4f, meta 0x2bbc3b1), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835465 data_alloc: 218103808 data_used: 23197
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 104 ms_handle_reset con 0x55cf1eea5000 session 0x55cf1ee97c00
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 15605760 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 104 ms_handle_reset con 0x55cf1faa4000 session 0x55cf1e75fdc0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 104 ms_handle_reset con 0x55cf1faa4000 session 0x55cf1f790c40
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 104 ms_handle_reset con 0x55cf1d879c00 session 0x55cf1da9b880
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 15605760 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fb9e8000/0x0/0x4ffc00000, data 0x15856ac/0x163d000, compress 0x0/0x0/0x0, omap 0x13f40, meta 0x2bbc0c0), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 104 ms_handle_reset con 0x55cf1eea5000 session 0x55cf1e2d7340
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 15589376 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 104 handle_osd_map epochs [104,105], i have 105, src has [1,105]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 15556608 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 105 ms_handle_reset con 0x55cf1faa4400 session 0x55cf1da9b180
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 15556608 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 105 ms_handle_reset con 0x55cf1eea4000 session 0x55cf1e705500
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 105 ms_handle_reset con 0x55cf1f7fd400 session 0x55cf1d5af6c0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 839964 data_alloc: 218103808 data_used: 24430
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 15671296 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 106 ms_handle_reset con 0x55cf1d879c00 session 0x55cf1fb44540
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 15663104 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba0e000/0x0/0x4ffc00000, data 0x15642c4/0x161c000, compress 0x0/0x0/0x0, omap 0x146e3, meta 0x2bbb91d), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 15663104 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 107 heartbeat osd_stat(store_statfs(0x4fba0b000/0x0/0x4ffc00000, data 0x1565790/0x161f000, compress 0x0/0x0/0x0, omap 0x149b4, meta 0x2bbb64c), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 15663104 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.730767250s of 10.873538971s, submitted: 112
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 107 ms_handle_reset con 0x55cf1fdd5400 session 0x55cf1e2b2540
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 107 ms_handle_reset con 0x55cf2001c000 session 0x55cf1ee96c40
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 107 ms_handle_reset con 0x55cf1fadfc00 session 0x55cf1fbb1500
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842323 data_alloc: 218103808 data_used: 22894
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 107 ms_handle_reset con 0x55cf1d879c00 session 0x55cf1d5afdc0
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 108 ms_handle_reset con 0x55cf1fdaa400 session 0x55cf1d5ae700
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 108 heartbeat osd_stat(store_statfs(0x4fba0a000/0x0/0x4ffc00000, data 0x1566d94/0x1620000, compress 0x0/0x0/0x0, omap 0x1509e, meta 0x2bbaf62), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 843615 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 108 heartbeat osd_stat(store_statfs(0x4fba0a000/0x0/0x4ffc00000, data 0x1566d94/0x1620000, compress 0x0/0x0/0x0, omap 0x1509e, meta 0x2bbaf62), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846389 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fba07000/0x0/0x4ffc00000, data 0x1568260/0x1623000, compress 0x0/0x0/0x0, omap 0x152b5, meta 0x2bbad4b), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 109 handle_osd_map epochs [109,110], i have 110, src has [1,110]
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.690909386s of 12.290042877s, submitted: 67
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 15761408 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-mgr[75500]: log_channel(cluster) log [DBG] : pgmap v1164: 177 pgs: 177 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread fragmentation_score=0.000123 took=0.000012s
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 15810560 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 15818752 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 15818752 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 15818752 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 15818752 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 15818752 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 15818752 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 15818752 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 15818752 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 15818752 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 15818752 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 15818752 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 15818752 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 15818752 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 15818752 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 15818752 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 15818752 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 15818752 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 15745024 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: do_command 'config diff' '{prefix=config diff}'
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: do_command 'config show' '{prefix=config show}'
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: do_command 'counter dump' '{prefix=counter dump}'
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 15548416 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: do_command 'counter schema' '{prefix=counter schema}'
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15343616 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 15343616 heap: 94380032 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: do_command 'log dump' '{prefix=log dump}'
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 26386432 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: do_command 'perf dump' '{prefix=perf dump}'
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: do_command 'perf schema' '{prefix=perf schema}'
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849163 data_alloc: 218103808 data_used: 22793
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: osd.1 110 heartbeat osd_stat(store_statfs(0x4fba04000/0x0/0x4ffc00000, data 0x1569710/0x1626000, compress 0x0/0x0/0x0, omap 0x154ce, meta 0x2bbab32), peers [0,2] op hist [])
Dec  3 16:43:48 np0005544708 ceph-osd[87094]: prioritycache tune_memory target: 4294967296 mapped: 79110144 unmapped: 26312704 heap: 105422848 old mem: 2845415832 new mem: 2845415832
